Test Report: KVM_Linux_crio 17731

                    
2299ceaec17b686deec86f12c40bdefcf1fe6842:2023-12-05:32161

Failed tests (30/301)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 155.81
36 TestAddons/parallel/InspektorGadget 482.71
48 TestAddons/StoppedEnableDisable 155.28
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.88
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 166.68
212 TestMultiNode/serial/PingHostFrom2Pods 3.25
219 TestMultiNode/serial/RestartKeepsNodes 691.05
221 TestMultiNode/serial/StopMultiNode 143.2
228 TestPreload 279.84
234 TestRunningBinaryUpgrade 147.57
242 TestStoppedBinaryUpgrade/Upgrade 288.83
270 TestPause/serial/SecondStartNoReconfiguration 105.54
280 TestStartStop/group/embed-certs/serial/Stop 140.26
283 TestStartStop/group/old-k8s-version/serial/Stop 139.52
286 TestStartStop/group/no-preload/serial/Stop 140.07
289 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.41
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.6
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
300 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.29
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.22
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.24
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.2
304 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.73
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 289.06
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 272.13
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 210.08
314 TestStartStop/group/newest-cni/serial/Stop 140.37
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.42
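A single failed subtest can usually be re-run in isolation with Go's -run filter, which matches subtest names segment by segment (top-level test, group, subtest). The invocation below is only a sketch: the ./test/integration package path and the --minikube-start-args flag are assumptions about the minikube test layout, and the start flags mirror the ones recorded in the Audit table of the first failure below.

	# hypothetical local re-run of one failed subtest; package path and test-binary flags are assumptions
	go test ./test/integration -v -timeout 30m \
	  -run "TestAddons/parallel/Ingress" \
	  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"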
TestAddons/parallel/Ingress (155.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-489440 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-489440 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-489440 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013028553s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-489440 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.261459445s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
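An exit status of 28 propagated through ssh most likely corresponds to curl's own exit code 28 (operation timed out), i.e. nothing answered on port 80 inside the VM before the request gave up. A minimal way to repeat the same probe by hand while the addons-489440 profile is still running is sketched below; the -sS, -I and -m 10 flags are additions for a bounded, visible result, while the URL and Host header are the ones the test uses.

	out/minikube-linux-amd64 -p addons-489440 ssh "curl -sS -I -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"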
addons_test.go:285: (dbg) Run:  kubectl --context addons-489440 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.118
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-489440 addons disable ingress-dns --alsologtostderr -v=1: (1.867522383s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-489440 addons disable ingress --alsologtostderr -v=1: (7.861995832s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-489440 -n addons-489440
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489440 logs -n 25: (1.434725387s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |                     |
	|         | -p download-only-103789                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | -p download-only-103789                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-103789                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-103789                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-109311 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-109311                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35295                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-109311                                                                     | binary-mirror-109311 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-489440 --wait=true                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489440 ssh cat                                                                       | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | /opt/local-path-provisioner/pvc-fb2b2dea-9f18-4d7a-86cd-fd40e7f776f4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-489440                                                                            |                      |         |         |                     |                     |
	| ip      | addons-489440 ip                                                                            | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-489440                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489440 ssh curl -s                                                                   | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-489440 ip                                                                            | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:14.159744   13818 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:14.159863   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:14.159869   13818 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:14.159876   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:14.160054   13818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 19:35:14.160709   13818 out.go:303] Setting JSON to false
	I1205 19:35:14.161493   13818 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1067,"bootTime":1701803847,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:14.161550   13818 start.go:138] virtualization: kvm guest
	I1205 19:35:14.163788   13818 out.go:177] * [addons-489440] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:14.166078   13818 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:14.166040   13818 notify.go:220] Checking for updates...
	I1205 19:35:14.167528   13818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:14.169231   13818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:35:14.170870   13818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.172298   13818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:14.173697   13818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:14.175234   13818 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:14.206373   13818 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:35:14.207768   13818 start.go:298] selected driver: kvm2
	I1205 19:35:14.207784   13818 start.go:902] validating driver "kvm2" against <nil>
	I1205 19:35:14.207795   13818 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:14.208816   13818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:14.208905   13818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:35:14.223163   13818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:35:14.223258   13818 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:14.223480   13818 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:14.223529   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:14.223537   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:14.223547   13818 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:14.223554   13818 start_flags.go:323] config:
	{Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:14.223678   13818 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:14.225732   13818 out.go:177] * Starting control plane node addons-489440 in cluster addons-489440
	I1205 19:35:14.227282   13818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:14.227325   13818 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:14.227332   13818 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:14.227425   13818 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:14.227439   13818 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:14.227738   13818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json ...
	I1205 19:35:14.227762   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json: {Name:mka12c39246080142bf01600aa551525066e8634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:14.227917   13818 start.go:365] acquiring machines lock for addons-489440: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:35:14.227986   13818 start.go:369] acquired machines lock for "addons-489440" in 51.373µs
	I1205 19:35:14.228016   13818 start.go:93] Provisioning new machine with config: &{Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:35:14.228071   13818 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:35:14.229883   13818 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 19:35:14.230032   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:14.230075   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:14.244307   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I1205 19:35:14.244802   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:14.245379   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:35:14.245404   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:14.245735   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:14.245901   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:14.246015   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:14.246148   13818 start.go:159] libmachine.API.Create for "addons-489440" (driver="kvm2")
	I1205 19:35:14.246178   13818 client.go:168] LocalClient.Create starting
	I1205 19:35:14.246225   13818 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 19:35:14.338616   13818 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 19:35:14.514178   13818 main.go:141] libmachine: Running pre-create checks...
	I1205 19:35:14.514205   13818 main.go:141] libmachine: (addons-489440) Calling .PreCreateCheck
	I1205 19:35:14.514731   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:14.515125   13818 main.go:141] libmachine: Creating machine...
	I1205 19:35:14.515140   13818 main.go:141] libmachine: (addons-489440) Calling .Create
	I1205 19:35:14.515309   13818 main.go:141] libmachine: (addons-489440) Creating KVM machine...
	I1205 19:35:14.516461   13818 main.go:141] libmachine: (addons-489440) DBG | found existing default KVM network
	I1205 19:35:14.517360   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.517194   13840 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1205 19:35:14.523271   13818 main.go:141] libmachine: (addons-489440) DBG | trying to create private KVM network mk-addons-489440 192.168.39.0/24...
	I1205 19:35:14.590428   13818 main.go:141] libmachine: (addons-489440) DBG | private KVM network mk-addons-489440 192.168.39.0/24 created
	I1205 19:35:14.590454   13818 main.go:141] libmachine: (addons-489440) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 ...
	I1205 19:35:14.590467   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.590404   13840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.590491   13818 main.go:141] libmachine: (addons-489440) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 19:35:14.590582   13818 main.go:141] libmachine: (addons-489440) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 19:35:14.810034   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.809902   13840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa...
	I1205 19:35:14.920613   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.920447   13840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/addons-489440.rawdisk...
	I1205 19:35:14.920663   13818 main.go:141] libmachine: (addons-489440) DBG | Writing magic tar header
	I1205 19:35:14.920680   13818 main.go:141] libmachine: (addons-489440) DBG | Writing SSH key tar header
	I1205 19:35:14.920694   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.920582   13840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 ...
	I1205 19:35:14.920711   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440
	I1205 19:35:14.920727   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 19:35:14.920747   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 (perms=drwx------)
	I1205 19:35:14.920769   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.920780   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 19:35:14.920791   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:35:14.920808   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:35:14.920828   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home
	I1205 19:35:14.920841   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:35:14.920850   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 19:35:14.920858   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 19:35:14.920868   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:35:14.920875   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:35:14.920883   13818 main.go:141] libmachine: (addons-489440) Creating domain...
	I1205 19:35:14.920890   13818 main.go:141] libmachine: (addons-489440) DBG | Skipping /home - not owner
	I1205 19:35:14.921921   13818 main.go:141] libmachine: (addons-489440) define libvirt domain using xml: 
	I1205 19:35:14.921943   13818 main.go:141] libmachine: (addons-489440) <domain type='kvm'>
	I1205 19:35:14.921954   13818 main.go:141] libmachine: (addons-489440)   <name>addons-489440</name>
	I1205 19:35:14.921964   13818 main.go:141] libmachine: (addons-489440)   <memory unit='MiB'>4000</memory>
	I1205 19:35:14.921973   13818 main.go:141] libmachine: (addons-489440)   <vcpu>2</vcpu>
	I1205 19:35:14.921982   13818 main.go:141] libmachine: (addons-489440)   <features>
	I1205 19:35:14.921988   13818 main.go:141] libmachine: (addons-489440)     <acpi/>
	I1205 19:35:14.922002   13818 main.go:141] libmachine: (addons-489440)     <apic/>
	I1205 19:35:14.922008   13818 main.go:141] libmachine: (addons-489440)     <pae/>
	I1205 19:35:14.922016   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922024   13818 main.go:141] libmachine: (addons-489440)   </features>
	I1205 19:35:14.922034   13818 main.go:141] libmachine: (addons-489440)   <cpu mode='host-passthrough'>
	I1205 19:35:14.922042   13818 main.go:141] libmachine: (addons-489440)   
	I1205 19:35:14.922057   13818 main.go:141] libmachine: (addons-489440)   </cpu>
	I1205 19:35:14.922070   13818 main.go:141] libmachine: (addons-489440)   <os>
	I1205 19:35:14.922084   13818 main.go:141] libmachine: (addons-489440)     <type>hvm</type>
	I1205 19:35:14.922108   13818 main.go:141] libmachine: (addons-489440)     <boot dev='cdrom'/>
	I1205 19:35:14.922119   13818 main.go:141] libmachine: (addons-489440)     <boot dev='hd'/>
	I1205 19:35:14.922125   13818 main.go:141] libmachine: (addons-489440)     <bootmenu enable='no'/>
	I1205 19:35:14.922149   13818 main.go:141] libmachine: (addons-489440)   </os>
	I1205 19:35:14.922177   13818 main.go:141] libmachine: (addons-489440)   <devices>
	I1205 19:35:14.922188   13818 main.go:141] libmachine: (addons-489440)     <disk type='file' device='cdrom'>
	I1205 19:35:14.922200   13818 main.go:141] libmachine: (addons-489440)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/boot2docker.iso'/>
	I1205 19:35:14.922211   13818 main.go:141] libmachine: (addons-489440)       <target dev='hdc' bus='scsi'/>
	I1205 19:35:14.922219   13818 main.go:141] libmachine: (addons-489440)       <readonly/>
	I1205 19:35:14.922226   13818 main.go:141] libmachine: (addons-489440)     </disk>
	I1205 19:35:14.922236   13818 main.go:141] libmachine: (addons-489440)     <disk type='file' device='disk'>
	I1205 19:35:14.922245   13818 main.go:141] libmachine: (addons-489440)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:35:14.922256   13818 main.go:141] libmachine: (addons-489440)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/addons-489440.rawdisk'/>
	I1205 19:35:14.922262   13818 main.go:141] libmachine: (addons-489440)       <target dev='hda' bus='virtio'/>
	I1205 19:35:14.922293   13818 main.go:141] libmachine: (addons-489440)     </disk>
	I1205 19:35:14.922306   13818 main.go:141] libmachine: (addons-489440)     <interface type='network'>
	I1205 19:35:14.922318   13818 main.go:141] libmachine: (addons-489440)       <source network='mk-addons-489440'/>
	I1205 19:35:14.922329   13818 main.go:141] libmachine: (addons-489440)       <model type='virtio'/>
	I1205 19:35:14.922342   13818 main.go:141] libmachine: (addons-489440)     </interface>
	I1205 19:35:14.922354   13818 main.go:141] libmachine: (addons-489440)     <interface type='network'>
	I1205 19:35:14.922368   13818 main.go:141] libmachine: (addons-489440)       <source network='default'/>
	I1205 19:35:14.922379   13818 main.go:141] libmachine: (addons-489440)       <model type='virtio'/>
	I1205 19:35:14.922391   13818 main.go:141] libmachine: (addons-489440)     </interface>
	I1205 19:35:14.922407   13818 main.go:141] libmachine: (addons-489440)     <serial type='pty'>
	I1205 19:35:14.922418   13818 main.go:141] libmachine: (addons-489440)       <target port='0'/>
	I1205 19:35:14.922426   13818 main.go:141] libmachine: (addons-489440)     </serial>
	I1205 19:35:14.922434   13818 main.go:141] libmachine: (addons-489440)     <console type='pty'>
	I1205 19:35:14.922440   13818 main.go:141] libmachine: (addons-489440)       <target type='serial' port='0'/>
	I1205 19:35:14.922448   13818 main.go:141] libmachine: (addons-489440)     </console>
	I1205 19:35:14.922455   13818 main.go:141] libmachine: (addons-489440)     <rng model='virtio'>
	I1205 19:35:14.922464   13818 main.go:141] libmachine: (addons-489440)       <backend model='random'>/dev/random</backend>
	I1205 19:35:14.922471   13818 main.go:141] libmachine: (addons-489440)     </rng>
	I1205 19:35:14.922512   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922619   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922645   13818 main.go:141] libmachine: (addons-489440)   </devices>
	I1205 19:35:14.922660   13818 main.go:141] libmachine: (addons-489440) </domain>
	I1205 19:35:14.922677   13818 main.go:141] libmachine: (addons-489440) 
	I1205 19:35:14.928410   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:97:db:62 in network default
	I1205 19:35:14.928993   13818 main.go:141] libmachine: (addons-489440) Ensuring networks are active...
	I1205 19:35:14.929024   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:14.929743   13818 main.go:141] libmachine: (addons-489440) Ensuring network default is active
	I1205 19:35:14.930115   13818 main.go:141] libmachine: (addons-489440) Ensuring network mk-addons-489440 is active
	I1205 19:35:14.930659   13818 main.go:141] libmachine: (addons-489440) Getting domain xml...
	I1205 19:35:14.931474   13818 main.go:141] libmachine: (addons-489440) Creating domain...
	I1205 19:35:16.395798   13818 main.go:141] libmachine: (addons-489440) Waiting to get IP...
	I1205 19:35:16.396657   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.397119   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.397141   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.397093   13840 retry.go:31] will retry after 264.881612ms: waiting for machine to come up
	I1205 19:35:16.663810   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.664267   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.664290   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.664224   13840 retry.go:31] will retry after 237.966873ms: waiting for machine to come up
	I1205 19:35:16.903971   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.904567   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.904600   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.904505   13840 retry.go:31] will retry after 365.814567ms: waiting for machine to come up
	I1205 19:35:17.272180   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:17.272685   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:17.272714   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:17.272642   13840 retry.go:31] will retry after 609.794264ms: waiting for machine to come up
	I1205 19:35:17.884599   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:17.885068   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:17.885091   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:17.885032   13840 retry.go:31] will retry after 503.152832ms: waiting for machine to come up
	I1205 19:35:18.389634   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:18.390035   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:18.390058   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:18.389986   13840 retry.go:31] will retry after 692.863454ms: waiting for machine to come up
	I1205 19:35:19.085146   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:19.085648   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:19.085669   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:19.085597   13840 retry.go:31] will retry after 833.550331ms: waiting for machine to come up
	I1205 19:35:19.920316   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:19.920845   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:19.920875   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:19.920778   13840 retry.go:31] will retry after 1.156757357s: waiting for machine to come up
	I1205 19:35:21.079096   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:21.079560   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:21.079598   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:21.079519   13840 retry.go:31] will retry after 1.491242494s: waiting for machine to come up
	I1205 19:35:22.573348   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:22.573837   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:22.573910   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:22.573826   13840 retry.go:31] will retry after 1.895533579s: waiting for machine to come up
	I1205 19:35:24.470986   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:24.471498   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:24.471533   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:24.471442   13840 retry.go:31] will retry after 2.736768173s: waiting for machine to come up
	I1205 19:35:27.209396   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:27.209937   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:27.209962   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:27.209872   13840 retry.go:31] will retry after 3.057692651s: waiting for machine to come up
	I1205 19:35:30.269596   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:30.270124   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:30.270159   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:30.270101   13840 retry.go:31] will retry after 4.032017669s: waiting for machine to come up
	I1205 19:35:34.305239   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:34.305672   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:34.305696   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:34.305634   13840 retry.go:31] will retry after 3.851038931s: waiting for machine to come up
	I1205 19:35:38.161676   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.162028   13818 main.go:141] libmachine: (addons-489440) Found IP for machine: 192.168.39.118
	I1205 19:35:38.162055   13818 main.go:141] libmachine: (addons-489440) Reserving static IP address...
	I1205 19:35:38.162066   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has current primary IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.162435   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find host DHCP lease matching {name: "addons-489440", mac: "52:54:00:e7:05:ac", ip: "192.168.39.118"} in network mk-addons-489440
	I1205 19:35:38.234192   13818 main.go:141] libmachine: (addons-489440) DBG | Getting to WaitForSSH function...
	I1205 19:35:38.234222   13818 main.go:141] libmachine: (addons-489440) Reserved static IP address: 192.168.39.118
	I1205 19:35:38.234235   13818 main.go:141] libmachine: (addons-489440) Waiting for SSH to be available...
	I1205 19:35:38.236866   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.237320   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.237351   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.237547   13818 main.go:141] libmachine: (addons-489440) DBG | Using SSH client type: external
	I1205 19:35:38.237586   13818 main.go:141] libmachine: (addons-489440) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa (-rw-------)
	I1205 19:35:38.237628   13818 main.go:141] libmachine: (addons-489440) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:35:38.237642   13818 main.go:141] libmachine: (addons-489440) DBG | About to run SSH command:
	I1205 19:35:38.237656   13818 main.go:141] libmachine: (addons-489440) DBG | exit 0
	I1205 19:35:38.342519   13818 main.go:141] libmachine: (addons-489440) DBG | SSH cmd err, output: <nil>: 
	I1205 19:35:38.342749   13818 main.go:141] libmachine: (addons-489440) KVM machine creation complete!
	I1205 19:35:38.343095   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:38.343738   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:38.343947   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:38.344093   13818 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:35:38.344109   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:35:38.345245   13818 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:35:38.345259   13818 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:35:38.345266   13818 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:35:38.345274   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.347322   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.347645   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.347675   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.347791   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.347948   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.348072   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.348220   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.348373   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.348693   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.348705   13818 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:35:38.477448   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:38.477471   13818 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:35:38.477478   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.480150   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.480491   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.480517   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.480693   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.480896   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.481059   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.481231   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.481395   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.481716   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.481728   13818 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:35:38.611268   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 19:35:38.611340   13818 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:35:38.611347   13818 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:35:38.611355   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.611582   13818 buildroot.go:166] provisioning hostname "addons-489440"
	I1205 19:35:38.611608   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.611755   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.614217   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.614556   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.614578   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.614744   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.614917   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.615057   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.615191   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.615360   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.615658   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.615671   13818 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-489440 && echo "addons-489440" | sudo tee /etc/hostname
	I1205 19:35:38.760205   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489440
	
	I1205 19:35:38.760229   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.762695   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.763022   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.763053   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.763188   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.763386   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.763584   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.763724   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.763909   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.764257   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.764274   13818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-489440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489440/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-489440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:38.902054   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:38.902082   13818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 19:35:38.902097   13818 buildroot.go:174] setting up certificates
	I1205 19:35:38.902107   13818 provision.go:83] configureAuth start
	I1205 19:35:38.902115   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.902409   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:38.904824   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.905112   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.905149   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.905327   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.907463   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.907774   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.907798   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.907957   13818 provision.go:138] copyHostCerts
	I1205 19:35:38.908027   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 19:35:38.908197   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 19:35:38.908310   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 19:35:38.908427   13818 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.addons-489440 san=[192.168.39.118 192.168.39.118 localhost 127.0.0.1 minikube addons-489440]
	I1205 19:35:39.128128   13818 provision.go:172] copyRemoteCerts
	I1205 19:35:39.128212   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:39.128237   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.130499   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.130773   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.130802   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.130906   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.131078   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.131244   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.131384   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.228107   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:39.250519   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:35:39.272663   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:35:39.294462   13818 provision.go:86] duration metric: configureAuth took 392.344209ms
	I1205 19:35:39.294487   13818 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:35:39.294665   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:35:39.294751   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.297323   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.297632   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.297659   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.297889   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.298086   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.298247   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.298408   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.298606   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.298961   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:39.298977   13818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:35:39.633812   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:35:39.633841   13818 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:35:39.633877   13818 main.go:141] libmachine: (addons-489440) Calling .GetURL
	I1205 19:35:39.635073   13818 main.go:141] libmachine: (addons-489440) DBG | Using libvirt version 6000000
	I1205 19:35:39.637564   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.637930   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.637968   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.638095   13818 main.go:141] libmachine: Docker is up and running!
	I1205 19:35:39.638112   13818 main.go:141] libmachine: Reticulating splines...
	I1205 19:35:39.638120   13818 client.go:171] LocalClient.Create took 25.39193102s
	I1205 19:35:39.638141   13818 start.go:167] duration metric: libmachine.API.Create for "addons-489440" took 25.391993142s
	I1205 19:35:39.638154   13818 start.go:300] post-start starting for "addons-489440" (driver="kvm2")
	I1205 19:35:39.638166   13818 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:35:39.638188   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.638458   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:35:39.638490   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.640792   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.641135   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.641166   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.641310   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.641491   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.641642   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.641767   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.735852   13818 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:35:39.740122   13818 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 19:35:39.740142   13818 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 19:35:39.740197   13818 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 19:35:39.740217   13818 start.go:303] post-start completed in 102.057615ms
	I1205 19:35:39.740246   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:39.740860   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:39.743930   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.744237   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.744267   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.744435   13818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json ...
	I1205 19:35:39.744590   13818 start.go:128] duration metric: createHost completed in 25.516504912s
	I1205 19:35:39.744608   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.746401   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.746791   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.746818   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.746921   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.747087   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.747249   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.747403   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.747545   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.747904   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:39.747917   13818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:35:39.879192   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701804939.862103465
	
	I1205 19:35:39.879218   13818 fix.go:206] guest clock: 1701804939.862103465
	I1205 19:35:39.879228   13818 fix.go:219] Guest: 2023-12-05 19:35:39.862103465 +0000 UTC Remote: 2023-12-05 19:35:39.744599227 +0000 UTC m=+25.630995544 (delta=117.504238ms)
	I1205 19:35:39.879280   13818 fix.go:190] guest clock delta is within tolerance: 117.504238ms
	I1205 19:35:39.879287   13818 start.go:83] releasing machines lock for "addons-489440", held for 25.651287508s
	I1205 19:35:39.879321   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.879550   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:39.881938   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.882230   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.882258   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.882392   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.882915   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.883078   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.883163   13818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:35:39.883212   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.883334   13818 ssh_runner.go:195] Run: cat /version.json
	I1205 19:35:39.883361   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.885833   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886119   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.886150   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886205   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886363   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.886532   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.886564   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.886587   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886756   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.886857   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.886930   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.887002   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.887192   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.887352   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.975566   13818 ssh_runner.go:195] Run: systemctl --version
	I1205 19:35:40.034134   13818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:35:40.194302   13818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:35:40.200537   13818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:35:40.200624   13818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:40.214601   13818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:35:40.214621   13818 start.go:475] detecting cgroup driver to use...
	I1205 19:35:40.214682   13818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:35:40.227363   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:35:40.239156   13818 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:35:40.239208   13818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:35:40.251130   13818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:35:40.263074   13818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:35:40.364842   13818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:35:40.491312   13818 docker.go:219] disabling docker service ...
	I1205 19:35:40.491378   13818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:35:40.503958   13818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:35:40.515603   13818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:35:40.614310   13818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:35:40.722603   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:35:40.735155   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:35:40.752438   13818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:35:40.752502   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.762547   13818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:35:40.762641   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.772533   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.782168   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.791508   13818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:35:40.801427   13818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:35:40.810095   13818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:35:40.810151   13818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:35:40.823249   13818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:35:40.832060   13818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:35:40.954077   13818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:35:41.122395   13818 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:35:41.122484   13818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:35:41.131169   13818 start.go:543] Will wait 60s for crictl version
	I1205 19:35:41.131265   13818 ssh_runner.go:195] Run: which crictl
	I1205 19:35:41.135480   13818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:35:41.173240   13818 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 19:35:41.173342   13818 ssh_runner.go:195] Run: crio --version
	I1205 19:35:41.221589   13818 ssh_runner.go:195] Run: crio --version
	I1205 19:35:41.268383   13818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 19:35:41.269793   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:41.272292   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:41.272659   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:41.272690   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:41.272901   13818 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:35:41.277169   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:41.291400   13818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:41.291447   13818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:41.334338   13818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 19:35:41.334415   13818 ssh_runner.go:195] Run: which lz4
	I1205 19:35:41.338889   13818 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 19:35:41.342953   13818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:35:41.342980   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 19:35:42.967336   13818 crio.go:444] Took 1.628478 seconds to copy over tarball
	I1205 19:35:42.967414   13818 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:35:46.370512   13818 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403064024s)
	I1205 19:35:46.370535   13818 crio.go:451] Took 3.403171 seconds to extract the tarball
	I1205 19:35:46.370544   13818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:35:46.412235   13818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:46.483393   13818 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:46.483418   13818 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:35:46.483483   13818 ssh_runner.go:195] Run: crio config
	I1205 19:35:46.553207   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:46.553229   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:46.553248   13818 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:35:46.553274   13818 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489440 NodeName:addons-489440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:35:46.553424   13818 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-489440"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:35:46.553501   13818 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-489440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:35:46.553557   13818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:35:46.562980   13818 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:35:46.563047   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:35:46.571647   13818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1205 19:35:46.588177   13818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:35:46.604192   13818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1205 19:35:46.620388   13818 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I1205 19:35:46.624349   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:46.636795   13818 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440 for IP: 192.168.39.118
	I1205 19:35:46.636838   13818 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.636992   13818 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 19:35:46.709565   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt ...
	I1205 19:35:46.709597   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt: {Name:mkd92853ad4ee64ebff4e435b2cc586d9215b621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.709761   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key ...
	I1205 19:35:46.709773   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key: {Name:mk2fb5b04c6af103934aa88af1b87a7b3539dcb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.709840   13818 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 19:35:46.809161   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt ...
	I1205 19:35:46.809190   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt: {Name:mk36bcbd2bcb143bdd57f2b15aecacacbfec2fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.809361   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key ...
	I1205 19:35:46.809375   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key: {Name:mk5598c61dc87d140bae66e4b9645218cf3cf0b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.809494   13818 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key
	I1205 19:35:46.809509   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt with IP's: []
	I1205 19:35:46.907040   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt ...
	I1205 19:35:46.907070   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: {Name:mk00e3f7f7afbf785ec9d44dafa974020feeae6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.907258   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key ...
	I1205 19:35:46.907278   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key: {Name:mka20324584ee4250f8c8033ad479bb3a69812f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.907378   13818 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9
	I1205 19:35:46.907397   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 with IP's: [192.168.39.118 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:35:47.111932   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 ...
	I1205 19:35:47.111965   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9: {Name:mk0dd9b4da1bab9c2a80e4dbfd9329f14ba21be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.112141   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9 ...
	I1205 19:35:47.112159   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9: {Name:mk4f94cb525e3c041d4cb708248e6b593206a768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.112249   13818 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt
	I1205 19:35:47.112342   13818 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key
	I1205 19:35:47.112393   13818 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key
	I1205 19:35:47.112406   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt with IP's: []
	I1205 19:35:47.260538   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt ...
	I1205 19:35:47.260573   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt: {Name:mk1b301692f07606254d56653011c58f802595fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.260757   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key ...
	I1205 19:35:47.260773   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key: {Name:mk5ca9b7e008662710cedf6525e99b1f35be4b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.260982   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:35:47.261031   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:35:47.261066   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:35:47.261112   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 19:35:47.261704   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:35:47.287542   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:35:47.316630   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:35:47.341984   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:35:47.366320   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:35:47.390383   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:35:47.413463   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:35:47.436130   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:35:47.459541   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:35:47.482746   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:35:47.499282   13818 ssh_runner.go:195] Run: openssl version
	I1205 19:35:47.504989   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:35:47.514669   13818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.519297   13818 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.519357   13818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.524788   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:35:47.534792   13818 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:35:47.539177   13818 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:35:47.539236   13818 kubeadm.go:404] StartCluster: {Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:47.539321   13818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:35:47.539403   13818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:35:47.576741   13818 cri.go:89] found id: ""
	I1205 19:35:47.576827   13818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:35:47.585956   13818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:35:47.594785   13818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:35:47.604006   13818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:35:47.604051   13818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:35:47.656547   13818 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:35:47.656679   13818 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:35:47.799861   13818 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:35:47.800013   13818 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:35:47.800120   13818 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:35:48.039101   13818 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:35:48.276732   13818 out.go:204]   - Generating certificates and keys ...
	I1205 19:35:48.276834   13818 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:35:48.276919   13818 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:35:48.330217   13818 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:35:48.397989   13818 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:35:48.535133   13818 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:35:48.607491   13818 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:35:48.731145   13818 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:35:48.731323   13818 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-489440 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1205 19:35:48.892685   13818 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:35:48.892894   13818 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-489440 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1205 19:35:48.960467   13818 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:35:49.009016   13818 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:35:49.166596   13818 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:35:49.166711   13818 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:35:49.222650   13818 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:35:49.446690   13818 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:35:49.653181   13818 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:35:49.712751   13818 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:35:49.713383   13818 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:35:49.715670   13818 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:35:49.717849   13818 out.go:204]   - Booting up control plane ...
	I1205 19:35:49.718008   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:35:49.718134   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:35:49.718246   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:35:49.735826   13818 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:35:49.736644   13818 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:35:49.736777   13818 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:35:49.866371   13818 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:35:57.369286   13818 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504851 seconds
	I1205 19:35:57.369440   13818 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:35:57.396596   13818 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:35:57.931766   13818 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:35:57.932038   13818 kubeadm.go:322] [mark-control-plane] Marking the node addons-489440 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:35:58.445481   13818 kubeadm.go:322] [bootstrap-token] Using token: zjs04c.xzfumu8bjkpzqcv2
	I1205 19:35:58.447342   13818 out.go:204]   - Configuring RBAC rules ...
	I1205 19:35:58.447510   13818 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:35:58.452534   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:35:58.460167   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:35:58.470254   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:35:58.474314   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:35:58.478698   13818 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:35:58.500772   13818 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:35:58.781320   13818 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:35:58.902070   13818 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:35:58.902116   13818 kubeadm.go:322] 
	I1205 19:35:58.902202   13818 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:35:58.902216   13818 kubeadm.go:322] 
	I1205 19:35:58.902351   13818 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:35:58.902363   13818 kubeadm.go:322] 
	I1205 19:35:58.902396   13818 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:35:58.902484   13818 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:35:58.902559   13818 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:35:58.902579   13818 kubeadm.go:322] 
	I1205 19:35:58.902666   13818 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:35:58.902677   13818 kubeadm.go:322] 
	I1205 19:35:58.902745   13818 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:35:58.902755   13818 kubeadm.go:322] 
	I1205 19:35:58.902828   13818 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:35:58.902908   13818 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:35:58.902984   13818 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:35:58.902999   13818 kubeadm.go:322] 
	I1205 19:35:58.903102   13818 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:35:58.903196   13818 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:35:58.903209   13818 kubeadm.go:322] 
	I1205 19:35:58.903318   13818 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zjs04c.xzfumu8bjkpzqcv2 \
	I1205 19:35:58.903468   13818 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 19:35:58.903537   13818 kubeadm.go:322] 	--control-plane 
	I1205 19:35:58.903554   13818 kubeadm.go:322] 
	I1205 19:35:58.903653   13818 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:35:58.903662   13818 kubeadm.go:322] 
	I1205 19:35:58.903760   13818 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zjs04c.xzfumu8bjkpzqcv2 \
	I1205 19:35:58.903885   13818 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 19:35:58.904648   13818 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
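With kubeadm init finished and only the kubelet-service warning left over, a quick sanity check from inside the VM would be to enable the kubelet unit as the warning suggests and confirm the node and static control-plane pods are up, reusing the bundled kubectl binary and kubeconfig paths that appear later in this log. Illustrative commands only; they are not part of the captured run:

  # Suggested by the [WARNING Service-Kubelet] line above:
  sudo systemctl enable kubelet.service
  # Confirm the node registered and the static control-plane pods are running:
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system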
	I1205 19:35:58.904667   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:58.904674   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:58.906642   13818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 19:35:58.908294   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 19:35:58.938165   13818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
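The contents of the 457-byte 1-k8s.conflist are not reproduced in the log. As an illustrative aside (the concrete values below are assumptions, not the file's actual contents), it can be inspected on the node and is expected to be a standard bridge-plus-portmap CNI config:

  # Inspect the bridge CNI config minikube just wrote:
  sudo cat /etc/cni/net.d/1-k8s.conflist
  # Expected rough shape (assumption, not the captured contents): a conflist with a
  # "bridge" plugin (default gateway, IP masquerade, hairpin mode, host-local IPAM
  # over the pod CIDR, e.g. 10.244.0.0/16) plus a "portmap" plugin for hostPorts.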
	I1205 19:35:59.006939   13818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:35:59.007005   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.007005   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-489440 minikube.k8s.io/updated_at=2023_12_05T19_35_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.276052   13818 ops.go:34] apiserver oom_adj: -16
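The two checks above (the apiserver's oom_adj and the minikube.k8s.io/* node labels) can be repeated by hand with the same paths the log uses; an illustrative re-check:

  # Re-read the apiserver's OOM score adjustment (expected to print -16, as logged):
  cat /proc/$(pgrep kube-apiserver)/oom_adj
  # Confirm the minikube.k8s.io/* labels landed on the node:
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get node addons-489440 --show-labels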
	I1205 19:35:59.276224   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.372513   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.963459   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.463150   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.963571   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.463883   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.962859   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.463491   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.963499   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.462934   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.962885   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.463799   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.963425   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.462931   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.963219   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.463441   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.963297   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.463098   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.963626   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:08.463632   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:08.963922   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:09.463162   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:09.963624   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:10.463115   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:10.963505   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:11.113708   13818 kubeadm.go:1088] duration metric: took 12.106755631s to wait for elevateKubeSystemPrivileges.
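The run of "kubectl get sa default" calls above is a readiness poll: the command is retried until the default ServiceAccount exists, at which point the RBAC elevation is considered complete (about 12.1s here). A roughly equivalent hand-rolled loop, shown only as a sketch:

  # Poll until the "default" ServiceAccount has been created.
  until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done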
	I1205 19:36:11.113742   13818 kubeadm.go:406] StartCluster complete in 23.574510275s
	I1205 19:36:11.113765   13818 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:11.113887   13818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:36:11.114233   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:11.114452   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:11.114520   13818 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
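The toEnable map above is the per-addon switchboard for this profile; the same toggles are exposed on the minikube CLI. Illustrative commands using the test binary and profile from this run:

  # Show addon states for the profile, then flip one on and off by name.
  out/minikube-linux-amd64 -p addons-489440 addons list
  out/minikube-linux-amd64 -p addons-489440 addons enable metrics-server
  out/minikube-linux-amd64 -p addons-489440 addons disable metrics-server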
	I1205 19:36:11.114593   13818 addons.go:69] Setting volumesnapshots=true in profile "addons-489440"
	I1205 19:36:11.114604   13818 addons.go:69] Setting ingress-dns=true in profile "addons-489440"
	I1205 19:36:11.114617   13818 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489440"
	I1205 19:36:11.114627   13818 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489440"
	I1205 19:36:11.114629   13818 addons.go:231] Setting addon ingress-dns=true in "addons-489440"
	I1205 19:36:11.114631   13818 addons.go:69] Setting storage-provisioner=true in profile "addons-489440"
	I1205 19:36:11.114644   13818 addons.go:69] Setting inspektor-gadget=true in profile "addons-489440"
	I1205 19:36:11.114657   13818 addons.go:69] Setting registry=true in profile "addons-489440"
	I1205 19:36:11.114663   13818 addons.go:231] Setting addon inspektor-gadget=true in "addons-489440"
	I1205 19:36:11.114667   13818 addons.go:231] Setting addon registry=true in "addons-489440"
	I1205 19:36:11.114679   13818 addons.go:69] Setting default-storageclass=true in profile "addons-489440"
	I1205 19:36:11.114695   13818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489440"
	I1205 19:36:11.114706   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114715   13818 addons.go:69] Setting gcp-auth=true in profile "addons-489440"
	I1205 19:36:11.114720   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:11.114737   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114767   13818 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489440"
	I1205 19:36:11.114765   13818 addons.go:69] Setting cloud-spanner=true in profile "addons-489440"
	I1205 19:36:11.114798   13818 addons.go:231] Setting addon cloud-spanner=true in "addons-489440"
	I1205 19:36:11.114637   13818 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-489440"
	I1205 19:36:11.114708   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114847   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114659   13818 addons.go:231] Setting addon storage-provisioner=true in "addons-489440"
	I1205 19:36:11.114940   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115159   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115168   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115187   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115159   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115215   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115218   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115234   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.114648   13818 addons.go:69] Setting metrics-server=true in profile "addons-489440"
	I1205 19:36:11.115272   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115278   13818 addons.go:231] Setting addon metrics-server=true in "addons-489440"
	I1205 19:36:11.114742   13818 mustload.go:65] Loading cluster: addons-489440
	I1205 19:36:11.114804   13818 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-489440"
	I1205 19:36:11.114748   13818 addons.go:69] Setting helm-tiller=true in profile "addons-489440"
	I1205 19:36:11.115304   13818 addons.go:231] Setting addon helm-tiller=true in "addons-489440"
	I1205 19:36:11.114759   13818 addons.go:69] Setting ingress=true in profile "addons-489440"
	I1205 19:36:11.115317   13818 addons.go:231] Setting addon ingress=true in "addons-489440"
	I1205 19:36:11.114622   13818 addons.go:231] Setting addon volumesnapshots=true in "addons-489440"
	I1205 19:36:11.115321   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.114837   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115357   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115370   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115395   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115444   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114640   13818 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489440"
	I1205 19:36:11.115256   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115758   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115781   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115886   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115911   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116088   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115763   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116119   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116140   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116193   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116222   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116418   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:11.116474   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116506   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116609   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116636   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116692   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.134340   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41105
	I1205 19:36:11.134836   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.134937   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I1205 19:36:11.135085   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1205 19:36:11.135209   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.135516   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.135533   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.135880   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.135962   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.135970   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.135979   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.136482   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.136521   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.136980   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.137086   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.137107   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.137530   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.137570   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.138081   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.138604   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I1205 19:36:11.138639   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I1205 19:36:11.138607   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.138681   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.138984   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.139050   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.139436   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.139452   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.139574   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.139586   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.139759   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.140174   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.140196   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.140294   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.154832   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.154898   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.155035   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.154906   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.155496   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.155541   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.168741   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I1205 19:36:11.169434   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.170462   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.170482   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.170831   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.170915   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43193
	I1205 19:36:11.171299   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.171708   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.171724   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.172024   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.172243   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.173142   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.174875   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.177307   13818 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1205 19:36:11.175339   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.178987   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1205 19:36:11.179008   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1205 19:36:11.179029   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.179250   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I1205 19:36:11.180917   13818 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:11.182760   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.181098   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1205 19:36:11.181520   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.182207   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.182960   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.184209   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.184221   13818 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:11.185562   13818 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:11.185578   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:11.185596   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.184236   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.184330   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.184670   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.185756   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.185777   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.185782   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
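From here on each addon follows the same pattern: the manifest is rendered in memory, copied over SSH into /etc/kubernetes/addons/ (the "scp memory -->" lines), and later applied against the cluster with the bundled kubectl. Applying one of the staged manifests by hand would look roughly like this sketch:

  # Apply a staged addon manifest manually (illustrative only):
  sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml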
	I1205 19:36:11.186249   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.186266   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.186658   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.186857   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.187069   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.187610   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.187647   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.189409   13818 addons.go:231] Setting addon default-storageclass=true in "addons-489440"
	I1205 19:36:11.189451   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.189825   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.189857   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.190071   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.190702   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.190714   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I1205 19:36:11.190724   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.190830   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.191008   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.191170   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.191282   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.192404   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.192919   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.192935   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.193270   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.193560   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.195104   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.197004   13818 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:11.199123   13818 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:11.199141   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:11.199161   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.198384   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I1205 19:36:11.198396   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I1205 19:36:11.201894   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I1205 19:36:11.202019   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1205 19:36:11.202136   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.202376   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.202781   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.202932   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.202955   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203033   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I1205 19:36:11.203056   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.203370   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.203471   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.203491   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.203563   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.203579   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203674   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.203692   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203944   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I1205 19:36:11.204045   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.204071   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204089   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.204102   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204531   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204555   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204585   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.204656   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.204785   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204815   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204951   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.205099   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.205114   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.205292   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.205488   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.205588   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.205794   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.206097   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.206122   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.206446   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.206463   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.206809   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.207019   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.207151   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.207705   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.207722   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.207791   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1205 19:36:11.208896   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.209147   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.209241   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.211314   13818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:11.210171   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.210703   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.212751   13818 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:11.212764   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:11.212783   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.215350   13818 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:11.219614   13818 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:11.219633   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:11.219654   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.215231   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.219720   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.215596   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I1205 19:36:11.216382   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.219868   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.219901   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.216936   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.220736   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.220944   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.221116   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.222059   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.222337   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.222482   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.222967   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.223154   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.223182   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.223366   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.223386   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.223669   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.223735   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.223884   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.224016   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.224143   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.224319   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.224354   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.226517   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I1205 19:36:11.226949   13818 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-489440"
	I1205 19:36:11.227001   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.227004   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.227419   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.227458   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.227492   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.227507   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.227864   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.227971   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.229263   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I1205 19:36:11.229848   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.230050   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.232281   13818 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:11.230534   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.234172   13818 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:11.232039   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37267
	I1205 19:36:11.234186   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:11.234201   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.232313   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.232754   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I1205 19:36:11.234804   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.234818   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.234880   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.235103   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.235421   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.235440   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.235634   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I1205 19:36:11.235815   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.236004   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.236345   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.236359   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.236753   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.236933   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.237361   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.237404   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.237622   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.237716   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.237760   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.239222   13818 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:11.238733   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.240651   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.240679   13818 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:11.239663   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.240299   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.240740   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:11.240760   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.240761   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I1205 19:36:11.240810   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.240831   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.241000   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I1205 19:36:11.241018   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.241044   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.241284   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.241344   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.241343   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.241704   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.242216   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.244006   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:11.244076   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.243133   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.243949   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.242633   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.244604   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.245507   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:11.247081   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:11.245552   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.245569   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.248562   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.249988   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:11.245881   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.245992   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.247519   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.248713   13818 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:11.251383   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:11.251409   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:11.251433   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:11.251458   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.251494   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.251417   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.251690   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.251994   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.252016   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.252209   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.252399   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.252909   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.253355   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I1205 19:36:11.253675   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I1205 19:36:11.253961   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.254012   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.254189   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.254458   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.254472   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.256139   13818 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:11.254877   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.255291   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.256863   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257587   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.257618   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:11.257587   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.257334   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.257654   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257424   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.256906   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257715   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.257736   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257632   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:11.257758   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.258060   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.258067   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.258106   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.258174   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.258189   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.258206   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.258228   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.258286   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.258388   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.258433   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.260703   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.261011   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.262668   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:11.261509   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.262711   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.261691   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.262945   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.264165   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:11.264373   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.265633   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:11.265812   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.267719   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:11.269179   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:11.270431   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I1205 19:36:11.271804   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:11.270850   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.274266   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:11.273599   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.275620   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:11.274218   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I1205 19:36:11.274816   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I1205 19:36:11.275645   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.276859   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:11.276872   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:11.276887   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.277402   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.277866   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.277880   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.277979   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.278317   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.278339   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.278475   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.278492   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.278710   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.279018   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.279552   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.279743   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.280861   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.280910   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281304   13818 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:11.281317   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:11.281332   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.281339   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.281360   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281496   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.281579   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.283355   13818 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:11.284771   13818 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:11.283644   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281741   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.284181   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.286294   13818 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:11.284800   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.286312   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:11.284979   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.284988   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.286329   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.286328   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.286492   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.286520   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.286636   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.288716   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.289078   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.289106   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.289273   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.289435   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.289576   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.289692   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	W1205 19:36:11.290502   13818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41202->192.168.39.118:22: read: connection reset by peer
	I1205 19:36:11.290526   13818 retry.go:31] will retry after 195.705751ms: ssh: handshake failed: read tcp 192.168.39.1:41202->192.168.39.118:22: read: connection reset by peer
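The handshake failure above is absorbed by a generic retry helper (retry.go in the log) instead of failing the addon install outright. A minimal sketch of that retry-after-delay pattern, using a hypothetical retryAfter helper rather than minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryAfter runs fn up to maxAttempts times, sleeping delay between failed
    // attempts, mirroring the "will retry after 195.705751ms" line above.
    func retryAfter(maxAttempts int, delay time.Duration, fn func() error) error {
        var err error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed, will retry after %s: %v\n", attempt, delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
        calls := 0
        err := retryAfter(3, 200*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                return errors.New("ssh: handshake failed: connection reset by peer")
            }
            return nil // the second dial succeeds, as it does in the log
        })
        fmt.Println("result:", err)
    }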
	I1205 19:36:11.430282   13818 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:11.430312   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:11.502972   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1205 19:36:11.503002   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1205 19:36:11.518037   13818 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:11.518070   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:11.519990   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:11.540994   13818 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:11.541021   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:11.567629   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:36:11.572676   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:11.572700   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:11.578227   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:11.615151   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:11.615175   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:11.615524   13818 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-489440" context rescaled to 1 replicas
	I1205 19:36:11.615565   13818 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:11.617528   13818 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:11.618982   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:11.637066   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:11.637096   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:11.640280   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:11.648833   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:11.648858   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1205 19:36:11.669257   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:11.694305   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:11.807391   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:11.813783   13818 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:11.813813   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:11.819832   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:11.875339   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:11.875367   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:11.941670   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:11.941695   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:12.119335   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:12.119363   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:12.119585   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:12.136371   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:12.208221   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:12.208242   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:12.234143   13818 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:12.234171   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:12.265811   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:12.265833   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:12.284092   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:12.284121   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:12.415386   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:12.415411   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:12.434752   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:12.442791   13818 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:12.442817   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:12.444322   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:12.444342   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:12.513776   13818 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:12.513796   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:12.519705   13818 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:12.519721   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:12.531041   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:12.531062   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:12.600068   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:12.611314   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:12.611342   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:12.632056   13818 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:12.632079   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:12.760040   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:12.760068   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:12.761012   13818 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:12.761024   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:12.818113   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:12.818137   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:12.822744   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:12.850508   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:12.850529   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:12.883590   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:12.883614   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:36:12.937198   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:15.658134   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.138105868s)
	I1205 19:36:15.658197   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:15.658211   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:15.658572   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:15.658635   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:15.658653   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:15.658663   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:15.658675   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:15.658981   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:15.658981   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:15.659012   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:16.979949   13818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.412271769s)
	I1205 19:36:16.979981   13818 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
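The 5.4s command that just completed is the sed pipeline started at 19:36:11.567: it reads the coredns ConfigMap, inserts a hosts{} stanza ahead of the forward plugin so that host.minikube.internal resolves to the host-side gateway (192.168.39.1), and replaces the ConfigMap. A simplified Go sketch of the same textual edit (the real flow shells out to kubectl and sed, and also inserts a log directive, which is omitted here):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a CoreDNS hosts{} stanza immediately before the
    // forward plugin line, mapping host.minikube.internal to hostIP.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            hostIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }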
	I1205 19:36:17.785590   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.207332815s)
	I1205 19:36:17.785643   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:17.785648   13818 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.16663415s)
	I1205 19:36:17.785656   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:17.785983   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:17.786001   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:17.786012   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:17.786014   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:17.786020   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:17.786290   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:17.786307   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:17.786633   13818 node_ready.go:35] waiting up to 6m0s for node "addons-489440" to be "Ready" ...
	I1205 19:36:17.954342   13818 node_ready.go:49] node "addons-489440" has status "Ready":"True"
	I1205 19:36:17.954366   13818 node_ready.go:38] duration metric: took 167.702127ms waiting for node "addons-489440" to be "Ready" ...
	I1205 19:36:17.954375   13818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:36:18.128632   13818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace to be "Ready" ...
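node_ready and pod_ready above poll the API server until the node's and each system pod's Ready condition reports True, with the 6m0s budget printed in the log. A rough client-go equivalent of the per-pod wait, assuming a kubeconfig at the default location and a client-go recent enough to have wait.PollUntilContextTimeout; this is a sketch, not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s, give up after 6 minutes, matching "waiting up to 6m0s".
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-bs76k", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                return isPodReady(pod), nil
            })
        fmt.Println("wait result:", err)
    }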
	I1205 19:36:18.610551   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:36:18.610598   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:18.613934   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:18.614515   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:18.614552   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:18.614774   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:18.615006   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:18.615180   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:18.615355   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:18.716708   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.07638677s)
	I1205 19:36:18.716767   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:18.716780   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:18.717095   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:18.717127   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:18.717144   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:18.717163   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:18.717497   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:18.717512   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:18.717520   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:18.972267   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:36:19.014804   13818 addons.go:231] Setting addon gcp-auth=true in "addons-489440"
	I1205 19:36:19.014870   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:19.015189   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:19.015216   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:19.029508   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I1205 19:36:19.029938   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:19.030366   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:19.030389   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:19.030708   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:19.031254   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:19.031285   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:19.045761   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43887
	I1205 19:36:19.046237   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:19.046696   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:19.046723   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:19.047041   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:19.047232   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:19.048877   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:19.049091   13818 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:36:19.049113   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:19.051789   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:19.052244   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:19.052265   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:19.052428   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:19.052606   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:19.052782   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:19.052949   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
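The "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:..." lines above reflect libmachine's plugin model: each driver call (GetVersion, GetState, GetSSHHostname, ...) is an RPC from the minikube process to a separately spawned docker-machine-driver-kvm2 binary listening on a loopback port. A toy net/rpc round trip showing the shape of that transport; the Driver/Args names and the single GetVersion method are illustrative, not the real plugin API:

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver stands in for the libmachine driver plugin; the real
    // docker-machine-driver-kvm2 exposes many more methods (GetState,
    // GetSSHHostname, ...) but uses the same localhost RPC transport.
    type Driver struct{}

    // Args is a placeholder request type for this illustration.
    type Args struct{ Name string }

    func (d *Driver) GetVersion(args Args, version *int) error {
        *version = 1 // "Using API Version  1" in the log
        return nil
    }

    func main() {
        // The plugin binary side: listen on an ephemeral loopback port.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        go srv.Accept(ln)

        // The minikube side: dial the advertised address and issue calls.
        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var version int
        if err := client.Call("Driver.GetVersion", Args{Name: "addons-489440"}, &version); err != nil {
            panic(err)
        }
        fmt.Println("plugin server listening at", ln.Addr(), "- API version", version)
    }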
	I1205 19:36:20.417049   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:20.420728   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.751435639s)
	I1205 19:36:20.420776   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420777   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.726435776s)
	I1205 19:36:20.420789   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420816   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420831   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420829   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.613404053s)
	I1205 19:36:20.420863   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420873   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.60101463s)
	I1205 19:36:20.420883   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420889   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420898   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420942   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.30133276s)
	I1205 19:36:20.420971   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420993   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.284598573s)
	I1205 19:36:20.421010   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421018   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421040   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421118   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.986337529s)
	I1205 19:36:20.421135   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421144   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421269   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.821170644s)
	W1205 19:36:20.421295   13818 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:20.421311   13818 retry.go:31] will retry after 207.414297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
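The failure above is a CRD ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the volumesnapshotclasses CRD applied in the same batch has not been accepted by the API server yet, so the custom resource has no matching kind. The retried apply at 19:36:20.629 (this time with --force) completes at 19:36:23.325, presumably once the CRDs are registered. One way to avoid the race is to wait for the CRD's Established condition before applying resources of that kind; a hedged client-go sketch, assuming a kubeconfig at the default location:

    package main

    import (
        "context"
        "fmt"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRDEstablished blocks until the named CRD reports Established=True,
    // so that resources of that kind no longer hit "no matches for kind".
    func waitForCRDEstablished(client apiextclient.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := apiextclient.NewForConfigOrDie(cfg)
        err = waitForCRDEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute)
        fmt.Println("CRD established:", err == nil)
    }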
	I1205 19:36:20.421407   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.598634254s)
	I1205 19:36:20.421438   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421450   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.422947   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.422944   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.422962   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.422973   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.422983   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.422987   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.422996   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423004   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423012   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423032   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423055   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423063   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423073   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423077   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423081   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423103   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423111   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423119   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423124   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423126   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423141   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423162   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423171   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423179   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423187   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423235   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423244   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423252   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423260   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423292   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423304   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423313   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423321   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423364   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423385   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423393   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423401   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423408   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423429   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423446   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423461   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423473   13818 addons.go:467] Verifying addon registry=true in "addons-489440"
	I1205 19:36:20.423511   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.425385   13818 out.go:177] * Verifying registry addon...
	I1205 19:36:20.423571   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423631   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423658   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423812   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423848   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423880   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423899   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424083   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424107   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424268   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424292   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424402   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424423   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.426943   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426955   13818 addons.go:467] Verifying addon metrics-server=true in "addons-489440"
	I1205 19:36:20.426987   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426987   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426999   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427054   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427097   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427216   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427227   13818 addons.go:467] Verifying addon ingress=true in "addons-489440"
	I1205 19:36:20.429765   13818 out.go:177] * Verifying ingress addon...
	I1205 19:36:20.427804   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:20.432368   13818 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:20.451632   13818 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:20.451649   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.454665   13818 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:20.454688   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
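The kapi waits here (registry and ingress-nginx now, csi-hostpath-driver and gcp-auth shortly after) all follow the same pattern: list pods by label selector in the addon's namespace and poll until every matched pod leaves Pending. A rough equivalent of one such check, sketched against a default kubeconfig:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether every pod matching the selector is Running,
    // mirroring the "Found N Pods for label selector ..." / "waiting for pod"
    // lines in the log.
    func allRunning(client kubernetes.Interface, namespace, selector string) (bool, error) {
        pods, err := client.CoreV1().Pods(namespace).List(context.Background(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, nil
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ok, err := allRunning(client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
        fmt.Println("all running:", ok, err)
    }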
	I1205 19:36:20.462445   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.462465   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.462596   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.462613   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.462703   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.462705   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.462721   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.462922   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.462961   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.462971   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	W1205 19:36:20.463048   13818 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
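The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the addon tries to mark the local-path StorageClass as the default while another writer has just updated the object, so the write is rejected against the stale resourceVersion. The usual remedy is a read-modify-write loop such as client-go's retry.RetryOnConflict; a sketch of that pattern against a default kubeconfig (the annotation key shown is the standard is-default-class marker, not necessarily exactly what the addon sets):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Re-read the StorageClass and reapply the annotation whenever the
        // update is rejected with a conflict, instead of failing outright.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := client.StorageV1().StorageClasses().Get(context.Background(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = client.StorageV1().StorageClasses().Update(context.Background(), sc, metav1.UpdateOptions{})
            return err
        })
        fmt.Println("mark default storage class:", err)
    }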
	I1205 19:36:20.467123   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.467314   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.629280   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:21.026747   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.090407   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.386480   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.449222303s)
	I1205 19:36:21.386517   13818 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.337402358s)
	I1205 19:36:21.386535   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:21.386549   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:21.388210   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:21.386925   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:21.386927   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:21.390794   13818 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:36:21.389486   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:21.392245   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:21.392254   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:21.392285   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:36:21.392304   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:36:21.392511   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:21.392519   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:21.392527   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:21.392537   13818 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-489440"
	I1205 19:36:21.393814   13818 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:21.395580   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:21.432194   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:36:21.432214   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:36:21.447404   13818 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:21.447425   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.471265   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:21.471285   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:36:21.486063   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.508822   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.519682   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.527582   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:22.019273   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.034091   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.036785   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.476053   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.476268   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.522615   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.898057   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:22.974429   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.974791   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.997727   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.325681   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.696345118s)
	I1205 19:36:23.325753   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.325768   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.326123   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.326142   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.326152   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.326162   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.326501   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.326521   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.326524   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.478250   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.478431   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.514933   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.638869   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.111249736s)
	I1205 19:36:23.638938   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.638950   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.639299   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.639385   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.639391   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.639412   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.639432   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.639680   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.639697   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.639701   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.641375   13818 addons.go:467] Verifying addon gcp-auth=true in "addons-489440"
	I1205 19:36:23.644579   13818 out.go:177] * Verifying gcp-auth addon...
	I1205 19:36:23.647144   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:36:23.673386   13818 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:36:23.673408   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.702009   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.992963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.996229   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.996580   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.210392   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.473593   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.474469   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.492846   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.706137   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.903048   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:24.975569   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.975634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.993145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.208328   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.477621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.477636   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.501485   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.706046   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.973987   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.974311   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.991814   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.206317   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.473139   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.473208   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.491661   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.711054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.974238   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.978558   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.000512   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.210421   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.410424   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:27.475972   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.475979   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.513885   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.710042   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.979207   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.981989   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.005884   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.205985   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.476140   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.476323   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.492941   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.709045   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.976745   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.977208   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.995844   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.209633   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.473065   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.473322   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.498338   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.706755   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.894308   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:29.974774   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.975543   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.992504   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.206493   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.473925   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.475920   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.492019   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.709621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.974822   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.975678   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.991541   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.218830   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.473697   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.476046   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.504739   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.727257   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.910662   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:31.972503   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.974086   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.992424   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.216493   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.473674   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.474979   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.495685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.712811   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.973519   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.975173   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.998746   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.208117   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.473221   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.474971   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.492059   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.706663   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.978376   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.983887   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.013021   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.219287   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.394314   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:34.475450   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.488699   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.493959   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.706663   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.006370   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.007703   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.008617   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.209000   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.476074   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.477586   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.498281   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.708621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.992153   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.005297   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.005802   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.210716   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.407290   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:36.477007   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.481687   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.491645   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.710528   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.976114   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.987392   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.994029   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.206590   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.477949   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.482544   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.497525   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.708763   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.972671   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.973893   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.992095   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.207251   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.472751   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.475426   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.490677   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.707569   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.910467   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:38.977814   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.984009   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.994665   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.207243   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.696664   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.698278   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.699749   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.725705   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.972280   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.973645   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.991720   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.206813   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.472215   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.473053   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.492837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.706980   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.973451   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.973783   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.992145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.214931   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.399348   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:41.472167   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.472979   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.493176   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.715364   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.974852   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.975002   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.995292   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.206226   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.473373   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.476151   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.491754   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.707585   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.974319   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.975312   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.993263   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.206855   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.473489   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.473862   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.491801   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.706237   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.894758   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:43.973478   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.973881   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.992204   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.205737   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.473700   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.474690   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.507158   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.705938   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.973218   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.973685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.992273   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.206543   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.473412   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.474963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.492253   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.706867   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.909112   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:45.972582   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.976160   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.992634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.206248   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.474205   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.474955   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.493821   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.706792   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.982246   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.982262   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.001334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.206219   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.477057   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.478934   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.495774   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.706604   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.974740   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.975385   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.003210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.206247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.396790   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:48.473093   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.473780   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.492019   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.709158   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.975217   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.975893   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.992351   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.207401   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.475366   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.475656   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.493136   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.707554   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.979796   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.979980   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.993565   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.206260   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.574052   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.574301   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.578573   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:50.579776   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.707873   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.973349   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.973456   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.992778   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.205672   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.474649   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.474963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.494496   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.706958   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.974058   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.975825   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.992393   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.207231   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.396466   13818 pod_ready.go:92] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.396488   13818 pod_ready.go:81] duration metric: took 34.26783129s waiting for pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.396497   13818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.399934   13818 pod_ready.go:97] error getting pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-tqsg5" not found
	I1205 19:36:52.399958   13818 pod_ready.go:81] duration metric: took 3.453349ms waiting for pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace to be "Ready" ...
	E1205 19:36:52.399967   13818 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-tqsg5" not found
	I1205 19:36:52.399973   13818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.420585   13818 pod_ready.go:92] pod "etcd-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.420618   13818 pod_ready.go:81] duration metric: took 20.637892ms waiting for pod "etcd-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.420646   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.447174   13818 pod_ready.go:92] pod "kube-apiserver-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.447202   13818 pod_ready.go:81] duration metric: took 26.548818ms waiting for pod "kube-apiserver-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.447215   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.479029   13818 pod_ready.go:92] pod "kube-controller-manager-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.479059   13818 pod_ready.go:81] duration metric: took 31.834453ms waiting for pod "kube-controller-manager-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.479075   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69z6s" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.486100   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.504145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.510173   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.595549   13818 pod_ready.go:92] pod "kube-proxy-69z6s" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.595581   13818 pod_ready.go:81] duration metric: took 116.498377ms waiting for pod "kube-proxy-69z6s" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.595596   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.706327   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.973678   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.978442   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.992723   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.996057   13818 pod_ready.go:92] pod "kube-scheduler-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.996083   13818 pod_ready.go:81] duration metric: took 400.479344ms waiting for pod "kube-scheduler-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.996096   13818 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:53.208007   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.474738   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.475431   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.491837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.707130   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.973316   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.974814   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.991646   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.207334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.474808   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.475221   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.494927   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.707519   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.972619   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.974162   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.993847   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.206401   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.301105   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:55.473160   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.476602   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.491848   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.706579   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.976093   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.976767   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.991941   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.206226   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.474938   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.475837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.491682   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.706320   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.975618   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.978362   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.000866   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.205759   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.302861   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:57.482807   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.483041   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.497534   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.710685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.975959   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.977510   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.994943   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.206461   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.477585   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.479583   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.494658   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.706595   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.143101   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.143611   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.149315   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.218911   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.329023   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:59.475858   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.476009   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.500144   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.705918   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.972342   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.973111   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.992065   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.209140   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.474233   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.476042   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.491732   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.705816   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.974241   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.975611   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.993934   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.207798   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.472946   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.474964   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.492969   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.706796   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.801471   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:01.972357   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.973832   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.991800   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.208137   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.473530   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.477987   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.491889   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.718351   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.974443   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.976247   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.991251   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.207239   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.300090   13818 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:03.300112   13818 pod_ready.go:81] duration metric: took 10.304006915s waiting for pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:03.300124   13818 pod_ready.go:38] duration metric: took 45.345740112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:03.300140   13818 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:03.300187   13818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:03.365095   13818 api_server.go:72] duration metric: took 51.74949574s to wait for apiserver process to appear ...
	I1205 19:37:03.365117   13818 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:03.365132   13818 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I1205 19:37:03.370236   13818 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I1205 19:37:03.371688   13818 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:03.371708   13818 api_server.go:131] duration metric: took 6.583537ms to wait for apiserver health ...
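Editor's note: the healthz probe recorded just above (api_server.go lines) is an ordinary HTTPS GET against the apiserver, which answered 200 "ok". A minimal Go sketch of the same probe follows; the endpoint URL is taken from the log, while skipping TLS verification is an illustration-only shortcut for the cluster's self-signed certificate, and unauthenticated access to /healthz may be restricted depending on the cluster's RBAC settings.

    // healthz_probe.go - sketch of the apiserver health check shown in the log.
    // The URL comes from the log above; InsecureSkipVerify is only for this
    // illustration, since the minikube apiserver uses a self-signed certificate.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.118:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz request failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// A healthy apiserver answers 200 with the body "ok", matching the log.
    	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }
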
	I1205 19:37:03.371717   13818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:03.386855   13818 system_pods.go:59] 18 kube-system pods found
	I1205 19:37:03.386882   13818 system_pods.go:61] "coredns-5dd5756b68-bs76k" [9235bef7-f927-40da-967d-19ee49cafa9d] Running
	I1205 19:37:03.386886   13818 system_pods.go:61] "csi-hostpath-attacher-0" [748bd69f-a0cf-49f5-8001-0ed8a15a1143] Running
	I1205 19:37:03.386890   13818 system_pods.go:61] "csi-hostpath-resizer-0" [9bfc6e31-08e6-418a-b32f-38d30424a77b] Running
	I1205 19:37:03.386897   13818 system_pods.go:61] "csi-hostpathplugin-hv64h" [e20670c4-f6aa-45f8-9821-3fd6c17ef864] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:03.386902   13818 system_pods.go:61] "etcd-addons-489440" [27f493cc-d5c2-4b03-95ad-dbcf16ed1e74] Running
	I1205 19:37:03.386908   13818 system_pods.go:61] "kube-apiserver-addons-489440" [676e77c5-9038-49a6-9987-652837182816] Running
	I1205 19:37:03.386912   13818 system_pods.go:61] "kube-controller-manager-addons-489440" [16501b70-40f4-4f63-a75f-d7f43d88464e] Running
	I1205 19:37:03.386916   13818 system_pods.go:61] "kube-ingress-dns-minikube" [2c2fd203-d47b-4fe5-bc28-544f23d55a61] Running
	I1205 19:37:03.386920   13818 system_pods.go:61] "kube-proxy-69z6s" [045a74a8-9584-44c6-a651-c58ff036bf8a] Running
	I1205 19:37:03.386925   13818 system_pods.go:61] "kube-scheduler-addons-489440" [495e957d-8044-4e22-8e20-455f2d3c3b96] Running
	I1205 19:37:03.386931   13818 system_pods.go:61] "metrics-server-7c66d45ddc-msjks" [5361bdf5-6fee-48ec-8911-5271ae9055e5] Running
	I1205 19:37:03.386938   13818 system_pods.go:61] "nvidia-device-plugin-daemonset-jw4c2" [2e516e12-3f41-47c1-a610-801efcb32379] Running
	I1205 19:37:03.386948   13818 system_pods.go:61] "registry-2nhwg" [1e708b27-168c-4eae-aebb-7d96da6c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:37:03.386959   13818 system_pods.go:61] "registry-proxy-wnn8h" [2f34e994-0f5a-4ee5-8faa-f0de5de7c04b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:03.386977   13818 system_pods.go:61] "snapshot-controller-58dbcc7b99-g77xf" [0935f346-3928-4760-9a36-10431ed6ce2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:37:03.386982   13818 system_pods.go:61] "snapshot-controller-58dbcc7b99-kkkvt" [37b5c9ae-1c6d-44c6-8c0b-818d39121ceb] Running
	I1205 19:37:03.386986   13818 system_pods.go:61] "storage-provisioner" [f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed] Running
	I1205 19:37:03.386995   13818 system_pods.go:61] "tiller-deploy-7b677967b9-l5vtg" [7e6cc3fe-6001-4c06-a49e-003585210abd] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1205 19:37:03.387000   13818 system_pods.go:74] duration metric: took 15.277981ms to wait for pod list to return data ...
	I1205 19:37:03.387010   13818 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:03.392666   13818 default_sa.go:45] found service account: "default"
	I1205 19:37:03.392685   13818 default_sa.go:55] duration metric: took 5.669609ms for default service account to be created ...
	I1205 19:37:03.392694   13818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:03.405703   13818 system_pods.go:86] 18 kube-system pods found
	I1205 19:37:03.405732   13818 system_pods.go:89] "coredns-5dd5756b68-bs76k" [9235bef7-f927-40da-967d-19ee49cafa9d] Running
	I1205 19:37:03.405743   13818 system_pods.go:89] "csi-hostpath-attacher-0" [748bd69f-a0cf-49f5-8001-0ed8a15a1143] Running
	I1205 19:37:03.405749   13818 system_pods.go:89] "csi-hostpath-resizer-0" [9bfc6e31-08e6-418a-b32f-38d30424a77b] Running
	I1205 19:37:03.405761   13818 system_pods.go:89] "csi-hostpathplugin-hv64h" [e20670c4-f6aa-45f8-9821-3fd6c17ef864] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:03.405769   13818 system_pods.go:89] "etcd-addons-489440" [27f493cc-d5c2-4b03-95ad-dbcf16ed1e74] Running
	I1205 19:37:03.405777   13818 system_pods.go:89] "kube-apiserver-addons-489440" [676e77c5-9038-49a6-9987-652837182816] Running
	I1205 19:37:03.405783   13818 system_pods.go:89] "kube-controller-manager-addons-489440" [16501b70-40f4-4f63-a75f-d7f43d88464e] Running
	I1205 19:37:03.405790   13818 system_pods.go:89] "kube-ingress-dns-minikube" [2c2fd203-d47b-4fe5-bc28-544f23d55a61] Running
	I1205 19:37:03.405798   13818 system_pods.go:89] "kube-proxy-69z6s" [045a74a8-9584-44c6-a651-c58ff036bf8a] Running
	I1205 19:37:03.405805   13818 system_pods.go:89] "kube-scheduler-addons-489440" [495e957d-8044-4e22-8e20-455f2d3c3b96] Running
	I1205 19:37:03.405815   13818 system_pods.go:89] "metrics-server-7c66d45ddc-msjks" [5361bdf5-6fee-48ec-8911-5271ae9055e5] Running
	I1205 19:37:03.405823   13818 system_pods.go:89] "nvidia-device-plugin-daemonset-jw4c2" [2e516e12-3f41-47c1-a610-801efcb32379] Running
	I1205 19:37:03.405837   13818 system_pods.go:89] "registry-2nhwg" [1e708b27-168c-4eae-aebb-7d96da6c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:37:03.405852   13818 system_pods.go:89] "registry-proxy-wnn8h" [2f34e994-0f5a-4ee5-8faa-f0de5de7c04b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:03.405868   13818 system_pods.go:89] "snapshot-controller-58dbcc7b99-g77xf" [0935f346-3928-4760-9a36-10431ed6ce2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:37:03.405879   13818 system_pods.go:89] "snapshot-controller-58dbcc7b99-kkkvt" [37b5c9ae-1c6d-44c6-8c0b-818d39121ceb] Running
	I1205 19:37:03.405890   13818 system_pods.go:89] "storage-provisioner" [f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed] Running
	I1205 19:37:03.405903   13818 system_pods.go:89] "tiller-deploy-7b677967b9-l5vtg" [7e6cc3fe-6001-4c06-a49e-003585210abd] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1205 19:37:03.405912   13818 system_pods.go:126] duration metric: took 13.212255ms to wait for k8s-apps to be running ...
	I1205 19:37:03.405925   13818 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:37:03.405975   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:03.447688   13818 system_svc.go:56] duration metric: took 41.757926ms WaitForService to wait for kubelet.
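Editor's note: the kubelet liveness check above is a plain systemctl exit-code test that minikube runs with sudo over SSH inside the VM. The sketch below runs the equivalent query locally via os/exec; dropping the SSH hop and sudo is an illustration-only simplification.

    // kubelet_active.go - sketch of the "is kubelet active" check logged above.
    // The real check runs over SSH inside the minikube VM with sudo; here the
    // unit is queried on the local host, which is an illustration-only shortcut.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// systemctl exits 0 only when the unit is active; --quiet suppresses output.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
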
	I1205 19:37:03.447712   13818 kubeadm.go:581] duration metric: took 51.832119007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:03.447736   13818 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:03.454072   13818 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 19:37:03.454103   13818 node_conditions.go:123] node cpu capacity is 2
	I1205 19:37:03.454120   13818 node_conditions.go:105] duration metric: took 6.378633ms to run NodePressure ...
	I1205 19:37:03.454133   13818 start.go:228] waiting for startup goroutines ...
	I1205 19:37:03.472233   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.472790   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.492104   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.706247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.975395   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.975950   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.994200   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.214103   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.476643   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.477035   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.493082   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.706252   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.975134   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.976372   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.992519   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.209054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.476116   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.477258   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.492924   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.706027   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.973875   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.974059   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.000195   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.205739   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.473655   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.475662   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.492738   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.706372   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.998209   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.998968   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.008334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.206210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.473706   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.475377   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.497417   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.706090   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.374744   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.375016   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.375054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.376336   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.476301   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.477893   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.497239   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.706444   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.972264   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.972986   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.992565   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.208461   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.474353   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.476001   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.493131   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.706470   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.973150   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.973175   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.992349   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.206538   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.479252   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.479606   13818 kapi.go:107] duration metric: took 50.051802769s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:37:10.491981   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.709079   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.972866   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.997022   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.210399   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.472928   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.492466   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.711576   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.973043   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.993453   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.206624   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.499951   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.520408   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.153161   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.154041   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.159028   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.205922   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.473064   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.492152   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.706354   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.973305   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.992210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.206366   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.474106   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.501900   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.707478   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.971560   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.992677   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.206300   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.476049   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.493136   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.706634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.972485   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.992647   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.206608   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.475155   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.523574   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.707854   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.974459   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.008384   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.207627   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.481192   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.501148   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.706344   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.972480   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.992454   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.206915   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.473772   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.501051   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.706411   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.972549   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.992819   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.206249   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.472680   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.492926   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.706447   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.975602   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.017416   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.430355   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.473298   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.492828   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.705760   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.972868   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.991833   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.206235   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.473281   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.493007   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.707028   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.972966   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.994051   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.210080   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.473014   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.493095   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.707907   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.972251   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.991767   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.205344   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.473126   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.493889   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.709247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.975034   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.992020   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.206804   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.478262   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.492558   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.710245   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.974364   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.996955   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.207354   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.477207   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.491711   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.705560   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.972443   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.992180   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.209170   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.475915   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.492489   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.707767   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.978232   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.995122   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.206080   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.480068   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.495360   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.706993   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.972909   13818 kapi.go:107] duration metric: took 1m7.540535763s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:37:27.992688   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.206066   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.492449   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.706113   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.993076   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.207170   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.494608   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.706246   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.992405   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.208520   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.494521   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.711686   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.993715   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.207327   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.495321   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.706656   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.993097   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.206669   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.493477   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.718640   13818 kapi.go:107] duration metric: took 1m9.071495911s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:37:32.720356   13818 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489440 cluster.
	I1205 19:37:32.721822   13818 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:37:32.723220   13818 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:37:32.992522   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.494196   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.991939   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.492453   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.991488   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.493712   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.993561   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.492308   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.993127   13818 kapi.go:107] duration metric: took 1m15.597543637s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:37:36.995260   13818 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, cloud-spanner, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1205 19:37:36.996893   13818 addons.go:502] enable addons completed in 1m25.882369128s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget cloud-spanner default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1205 19:37:36.996933   13818 start.go:233] waiting for cluster config update ...
	I1205 19:37:36.996952   13818 start.go:242] writing updated cluster config ...
	I1205 19:37:36.997202   13818 ssh_runner.go:195] Run: rm -f paused
	I1205 19:37:37.047987   13818 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:37:37.049742   13818 out.go:177] * Done! kubectl is now configured to use "addons-489440" cluster and "default" namespace by default
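	The gcp-auth messages in the log above point at two opt-in/opt-out paths: labeling a pod so the webhook skips mounting credentials, and rerunning the addon enable with --refresh so pods that already existed get the mount. A minimal sketch of the label approach follows; the gcp-auth-skip-secret key comes from the log itself, while the "true" value, the pod name, and the image tag are illustrative assumptions, not taken from this report.

	  # Hypothetical pod spec showing the gcp-auth opt-out label.
	  # Label key is quoted from the log above; the value "true" is an assumption.
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                  # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"      # asks the gcp-auth webhook not to mount credentials
	  spec:
	    containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0   # same image family used elsewhere in this report

	For workloads created before the addon finished, the log suggests either recreating those pods or rerunning "minikube addons enable gcp-auth --refresh" so the credentials are mounted on the next restart.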
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 19:35:27 UTC, ends at Tue 2023-12-05 19:40:26 UTC. --
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.334210280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805226334193412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=9b38fe12-9aec-44bf-b3f0-089ef861430f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.335046102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=526b6884-dcf0-4249-9bf6-ac03374daa6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.335101010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=526b6884-dcf0-4249-9bf6-ac03374daa6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.335399489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7749143397eb5a89251b3819cf1ffe5bb8aa0944b8091b60615eaba7c179b9c,PodSandboxId:4c6019201399a831ed5d1d960eb81949b5ab60de178dc6127bca659ee041d191,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-ng
inx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805012493540315,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c6p2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701ffb07-2d36-44b5-b6a9-f4939cd77c50,},Annotations:map[string]string{io.kubernetes.container.hash: 503566cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9183be2fa0a162ccea19f4e1d57733a2018f92708f75502bb426871293018,PodSandboxId:a73bf0b2c39d46eb12bace56b225fd8bc32acab5da5a718b9d7ef6830e56ee3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805010692675762,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p7scw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f9b1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051
e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889
323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695
f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425
667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=526b6884-dcf0-4249-9bf6-ac03374daa6d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.383540912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3ce48951-c49f-4c93-9248-abd5d95e1d51 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.383598826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3ce48951-c49f-4c93-9248-abd5d95e1d51 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.384661537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ecfca73f-b692-4f18-b55e-7bd1f7d6d034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.386032198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805226386014245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=ecfca73f-b692-4f18-b55e-7bd1f7d6d034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.387067099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33996063-3d12-426b-93ec-37325b406227 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.387118849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33996063-3d12-426b-93ec-37325b406227 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.387416261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7749143397eb5a89251b3819cf1ffe5bb8aa0944b8091b60615eaba7c179b9c,PodSandboxId:4c6019201399a831ed5d1d960eb81949b5ab60de178dc6127bca659ee041d191,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-ng
inx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805012493540315,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c6p2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701ffb07-2d36-44b5-b6a9-f4939cd77c50,},Annotations:map[string]string{io.kubernetes.container.hash: 503566cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9183be2fa0a162ccea19f4e1d57733a2018f92708f75502bb426871293018,PodSandboxId:a73bf0b2c39d46eb12bace56b225fd8bc32acab5da5a718b9d7ef6830e56ee3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805010692675762,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p7scw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f9b1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051
e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889
323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695
f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425
667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33996063-3d12-426b-93ec-37325b406227 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.427922330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ff9f35a0-437b-4a0b-a4e8-fc1656503c45 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.428009109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ff9f35a0-437b-4a0b-a4e8-fc1656503c45 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.429309530Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a4042d9b-b45e-496f-bf94-c20e90b6d6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.430591942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805226430577151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=a4042d9b-b45e-496f-bf94-c20e90b6d6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.431246404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=473db06a-7a6a-414a-a77e-9393a1a85c8a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.431296217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=473db06a-7a6a-414a-a77e-9393a1a85c8a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.431574927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7749143397eb5a89251b3819cf1ffe5bb8aa0944b8091b60615eaba7c179b9c,PodSandboxId:4c6019201399a831ed5d1d960eb81949b5ab60de178dc6127bca659ee041d191,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-ng
inx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805012493540315,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c6p2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701ffb07-2d36-44b5-b6a9-f4939cd77c50,},Annotations:map[string]string{io.kubernetes.container.hash: 503566cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9183be2fa0a162ccea19f4e1d57733a2018f92708f75502bb426871293018,PodSandboxId:a73bf0b2c39d46eb12bace56b225fd8bc32acab5da5a718b9d7ef6830e56ee3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805010692675762,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p7scw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f9b1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051
e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889
323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695
f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425
667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=473db06a-7a6a-414a-a77e-9393a1a85c8a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.473446965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e1f07e62-3fca-42e6-a0b3-e97d64cd033e name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.473507833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e1f07e62-3fca-42e6-a0b3-e97d64cd033e name=/runtime.v1.RuntimeService/Version
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.474838134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=58bb3908-02cc-4420-92d3-6754f26ab5f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.476048817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805226476030575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=58bb3908-02cc-4420-92d3-6754f26ab5f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.476599761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f10d58f3-fecd-45ae-9e05-7061617d7f8b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.476644343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f10d58f3-fecd-45ae-9e05-7061617d7f8b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:40:26 addons-489440 crio[714]: time="2023-12-05 19:40:26.476989907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7749143397eb5a89251b3819cf1ffe5bb8aa0944b8091b60615eaba7c179b9c,PodSandboxId:4c6019201399a831ed5d1d960eb81949b5ab60de178dc6127bca659ee041d191,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-ng
inx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805012493540315,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-c6p2l,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701ffb07-2d36-44b5-b6a9-f4939cd77c50,},Annotations:map[string]string{io.kubernetes.container.hash: 503566cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe9183be2fa0a162ccea19f4e1d57733a2018f92708f75502bb426871293018,PodSandboxId:a73bf0b2c39d46eb12bace56b225fd8bc32acab5da5a718b9d7ef6830e56ee3a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701805010692675762,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p7scw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68,},Annotations:map[string]string{io.kubernetes.container.hash: 7d4f9b1a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a
562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annota
tions:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:r
egistry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051
e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889
323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695
f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425
667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f10d58f3-fecd-45ae-9e05-7061617d7f8b name=/runtime.v1.RuntimeService/ListContainers
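The Request/Response pairs above are cri-o answering the kubelet's periodic CRI polls (Version, ImageFsInfo, ListContainers) over its local socket. As a hedged illustration only — not something the test run executes — the Go sketch below replays the same ListContainers call against that socket. The import paths and the insecure local dial are assumptions about the environment; the socket path matches the node's cri-socket annotation shown later in this log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same endpoint as the kubeadm cri-socket annotation in the node description.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Empty filter, matching the "No filters were applied" debug line above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State.String())
	}
}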
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae1985394f3bb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   64b6a247bda41       hello-world-app-5d77478584-fp699
	b5ad09ba49d21       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   eb45659ad86cd       headlamp-777fd4b855-p25zv
	f28adb15f8f71       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                              2 minutes ago       Running             nginx                     0                   ddf2dcdf860d9       nginx
	665ca839baff3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   41343de0b4b47       gcp-auth-d4c87556c-v4pj4
	9a8de113f71e7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   9a60469a89015       local-path-provisioner-78b46b4d5c-v8xhf
	a7749143397eb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   4c6019201399a       ingress-nginx-admission-patch-c6p2l
	cbe9183be2fa0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   a73bf0b2c39d4       ingress-nginx-admission-create-p7scw
	56da741b0e679       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   e47ee8c404fa7       storage-provisioner
	08af183c15119       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   22028c39f0470       kube-proxy-69z6s
	0f36356c251aa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   5fa163c440eed       coredns-5dd5756b68-bs76k
	3164ce1003424       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   64a5d6690e766       kube-controller-manager-addons-489440
	d96fc281b8835       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   c9bea52f8102c       kube-apiserver-addons-489440
	16af3ce986c39       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   4bd684f0f43cf       kube-scheduler-addons-489440
	2ef53e67c550a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   6a7a234c307e5       etcd-addons-489440
	
	* 
	* ==> coredns [0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772] <==
	* [INFO] 10.244.0.7:55687 - 64799 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001302398s
	[INFO] 10.244.0.7:36718 - 1494 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128365s
	[INFO] 10.244.0.7:36718 - 40147 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128884s
	[INFO] 10.244.0.7:40731 - 32294 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084429s
	[INFO] 10.244.0.7:40731 - 52260 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116427s
	[INFO] 10.244.0.7:59881 - 23520 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085358s
	[INFO] 10.244.0.7:59881 - 2030 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000271301s
	[INFO] 10.244.0.7:43856 - 17851 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081538s
	[INFO] 10.244.0.7:43856 - 11454 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109014s
	[INFO] 10.244.0.7:49609 - 8744 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067895s
	[INFO] 10.244.0.7:49609 - 60717 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094704s
	[INFO] 10.244.0.7:53791 - 49343 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067382s
	[INFO] 10.244.0.7:53791 - 1981 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082651s
	[INFO] 10.244.0.7:41446 - 7895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067686s
	[INFO] 10.244.0.7:41446 - 16854 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010281s
	[INFO] 10.244.0.21:35627 - 21806 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000416384s
	[INFO] 10.244.0.21:45191 - 17360 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199205s
	[INFO] 10.244.0.21:45028 - 26672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111045s
	[INFO] 10.244.0.21:35200 - 9190 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103176s
	[INFO] 10.244.0.21:53219 - 22294 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123171s
	[INFO] 10.244.0.21:56835 - 33792 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064469s
	[INFO] 10.244.0.21:39544 - 17888 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000577125s
	[INFO] 10.244.0.21:39586 - 52841 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000412989s
	[INFO] 10.244.0.25:36971 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000361052s
	[INFO] 10.244.0.25:45599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000256739s
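The NXDOMAIN/NOERROR pairs above are pod resolvers walking the cluster search path (registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local and friends fail before the bare service FQDN succeeds). Purely as a hedged sketch, the Go snippet below asks the cluster DNS for that record with a fully qualified name, skipping the search-path expansion; the 10.96.0.10:53 resolver address is an assumed default kube-dns ClusterIP, not a value taken from this report.

package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	// Fully qualified name, so the client does not repeat the search-path
	// expansion that produced the NXDOMAIN lines in the coredns log above.
	m.SetQuestion(dns.Fqdn("registry.kube-system.svc.cluster.local"), dns.TypeA)

	c := new(dns.Client)
	// Assumed kube-dns ClusterIP; check `kubectl -n kube-system get svc kube-dns`.
	r, rtt, err := c.Exchange(m, "10.96.0.10:53")
	if err != nil {
		log.Fatalf("exchange: %v", err)
	}
	fmt.Printf("rcode=%s rtt=%s answers=%d\n",
		dns.RcodeToString[r.Rcode], rtt, len(r.Answer))
}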
	
	* 
	* ==> describe nodes <==
	* Name:               addons-489440
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-489440
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-489440
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_35_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-489440
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-489440
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:40:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:38:33 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:38:33 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:38:33 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:38:33 +0000   Tue, 05 Dec 2023 19:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    addons-489440
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f0caae5431a4ba69508657e9c16b9d8
	  System UUID:                3f0caae5-431a-4ba6-9508-657e9c16b9d8
	  Boot ID:                    15ec9711-35f8-4678-a5f1-f3ddfbade60f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-fp699           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gadget                      gadget-78klf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  gcp-auth                    gcp-auth-d4c87556c-v4pj4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  headlamp                    headlamp-777fd4b855-p25zv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-5dd5756b68-bs76k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m15s
	  kube-system                 etcd-addons-489440                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m27s
	  kube-system                 kube-apiserver-addons-489440               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-addons-489440      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-69z6s                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-489440               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  local-path-storage          local-path-provisioner-78b46b4d5c-v8xhf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m57s  kube-proxy       
	  Normal  Starting                 4m28s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s  kubelet          Node addons-489440 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s  kubelet          Node addons-489440 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s  kubelet          Node addons-489440 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m27s  kubelet          Node addons-489440 status is now: NodeReady
	  Normal  RegisteredNode           4m16s  node-controller  Node addons-489440 event: Registered Node addons-489440 in Controller
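The conditions and allocatable figures in this describe-nodes block can also be read programmatically. As a hedged sketch — assuming a kubeconfig for the addons-489440 profile at the default ~/.kube/config location, whereas the harness itself goes through kubectl --context addons-489440 — the client-go snippet below prints the same node conditions and allocatable resources.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed default kubeconfig location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-489440", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Same data as the Conditions table above (MemoryPressure, DiskPressure, PIDPressure, Ready).
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	// Same data as the Allocatable block above (cpu, memory, pods, ...).
	for name, qty := range node.Status.Allocatable {
		fmt.Printf("allocatable %-18s %s\n", name, qty.String())
	}
}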
	
	* 
	* ==> dmesg <==
	* [  +3.431563] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155710] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.021639] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.377191] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.111156] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.144004] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.099375] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.231125] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +8.895945] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +8.781027] systemd-fstab-generator[1242]: Ignoring "noauto" for root device
	[Dec 5 19:36] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.595135] kauditd_printk_skb: 4 callbacks suppressed
	[ +22.334495] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.114683] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 5 19:37] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.594898] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.464956] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.087008] kauditd_printk_skb: 22 callbacks suppressed
	[Dec 5 19:38] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.108139] kauditd_printk_skb: 7 callbacks suppressed
	[ +19.392247] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 5 19:40] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557] <==
	* {"level":"info","ts":"2023-12-05T19:37:20.424217Z","caller":"traceutil/trace.go:171","msg":"trace[1886865848] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"407.372444ms","start":"2023-12-05T19:37:20.016838Z","end":"2023-12-05T19:37:20.424211Z","steps":["trace[1886865848] 'process raft request'  (duration: 407.066103ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:20.424317Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T19:37:20.01682Z","time spent":"407.439375ms","remote":"127.0.0.1:48214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3395,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" mod_revision:1049 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" value_size:3336 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" > >"}
	{"level":"warn","ts":"2023-12-05T19:37:20.424353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.853062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-05T19:37:20.424424Z","caller":"traceutil/trace.go:171","msg":"trace[423503306] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:1052; }","duration":"221.932048ms","start":"2023-12-05T19:37:20.202483Z","end":"2023-12-05T19:37:20.424415Z","steps":["trace[423503306] 'agreement among raft nodes before linearized reading'  (duration: 221.813129ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:20.424566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.380005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11289"}
	{"level":"info","ts":"2023-12-05T19:37:20.424666Z","caller":"traceutil/trace.go:171","msg":"trace[2122425548] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1052; }","duration":"223.48127ms","start":"2023-12-05T19:37:20.201178Z","end":"2023-12-05T19:37:20.424659Z","steps":["trace[2122425548] 'agreement among raft nodes before linearized reading'  (duration: 223.341639ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.196039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.794013ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17375885974416719506 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" value_size:541 lease:8152513937561942651 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T19:37:53.196176Z","caller":"traceutil/trace.go:171","msg":"trace[1640709176] linearizableReadLoop","detail":"{readStateIndex:1332; appliedIndex:1331; }","duration":"258.206067ms","start":"2023-12-05T19:37:52.937954Z","end":"2023-12-05T19:37:53.19616Z","steps":["trace[1640709176] 'read index received'  (duration: 89.079787ms)","trace[1640709176] 'applied index is now lower than readState.Index'  (duration: 169.124959ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:37:53.1964Z","caller":"traceutil/trace.go:171","msg":"trace[631264953] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"349.912594ms","start":"2023-12-05T19:37:52.846464Z","end":"2023-12-05T19:37:53.196376Z","steps":["trace[631264953] 'process raft request'  (duration: 180.606596ms)","trace[631264953] 'compare'  (duration: 168.465432ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:37:53.196458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T19:37:52.84645Z","time spent":"349.977441ms","remote":"127.0.0.1:48186","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":614,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" value_size:541 lease:8152513937561942651 >> failure:<>"}
	{"level":"warn","ts":"2023-12-05T19:37:53.196616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.651416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2023-12-05T19:37:53.196634Z","caller":"traceutil/trace.go:171","msg":"trace[1166477007] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1290; }","duration":"258.701138ms","start":"2023-12-05T19:37:52.937927Z","end":"2023-12-05T19:37:53.196628Z","steps":["trace[1166477007] 'agreement among raft nodes before linearized reading'  (duration: 258.62355ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.69657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-wnn8h\" ","response":"range_response_count:1 size:3871"}
	{"level":"info","ts":"2023-12-05T19:37:53.197384Z","caller":"traceutil/trace.go:171","msg":"trace[1769871096] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-wnn8h; range_end:; response_count:1; response_revision:1290; }","duration":"232.761266ms","start":"2023-12-05T19:37:52.964615Z","end":"2023-12-05T19:37:53.197376Z","steps":["trace[1769871096] 'agreement among raft nodes before linearized reading'  (duration: 232.619944ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.437686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:8026"}
	{"level":"info","ts":"2023-12-05T19:37:53.197607Z","caller":"traceutil/trace.go:171","msg":"trace[1323183814] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1290; }","duration":"105.497239ms","start":"2023-12-05T19:37:53.092103Z","end":"2023-12-05T19:37:53.197601Z","steps":["trace[1323183814] 'agreement among raft nodes before linearized reading'  (duration: 105.409375ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197822Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.203917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/nginx\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:37:53.197943Z","caller":"traceutil/trace.go:171","msg":"trace[868965987] range","detail":"{range_begin:/registry/pods/default/nginx; range_end:; response_count:0; response_revision:1290; }","duration":"182.327799ms","start":"2023-12-05T19:37:53.015609Z","end":"2023-12-05T19:37:53.197936Z","steps":["trace[868965987] 'agreement among raft nodes before linearized reading'  (duration: 182.189371ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.198073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.862595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-05T19:37:53.198111Z","caller":"traceutil/trace.go:171","msg":"trace[1477530667] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1290; }","duration":"219.903232ms","start":"2023-12-05T19:37:52.978202Z","end":"2023-12-05T19:37:53.198105Z","steps":["trace[1477530667] 'agreement among raft nodes before linearized reading'  (duration: 219.841613ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:01.881238Z","caller":"traceutil/trace.go:171","msg":"trace[418673253] linearizableReadLoop","detail":"{readStateIndex:1422; appliedIndex:1421; }","duration":"143.438738ms","start":"2023-12-05T19:38:01.737786Z","end":"2023-12-05T19:38:01.881225Z","steps":["trace[418673253] 'read index received'  (duration: 143.283808ms)","trace[418673253] 'applied index is now lower than readState.Index'  (duration: 154.463µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:38:01.881486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.776615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-05T19:38:01.881547Z","caller":"traceutil/trace.go:171","msg":"trace[1754305013] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1378; }","duration":"143.852293ms","start":"2023-12-05T19:38:01.737685Z","end":"2023-12-05T19:38:01.881538Z","steps":["trace[1754305013] 'agreement among raft nodes before linearized reading'  (duration: 143.731227ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:01.881698Z","caller":"traceutil/trace.go:171","msg":"trace[1934344253] transaction","detail":"{read_only:false; response_revision:1378; number_of_response:1; }","duration":"258.040628ms","start":"2023-12-05T19:38:01.623651Z","end":"2023-12-05T19:38:01.881691Z","steps":["trace[1934344253] 'process raft request'  (duration: 257.462618ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:15.608136Z","caller":"traceutil/trace.go:171","msg":"trace[885706186] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"213.577061ms","start":"2023-12-05T19:38:15.394529Z","end":"2023-12-05T19:38:15.608106Z","steps":["trace[885706186] 'process raft request'  (duration: 213.469345ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7] <==
	* 2023/12/05 19:37:32 GCP Auth Webhook started!
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:46 Ready to marshal response ...
	2023/12/05 19:37:46 Ready to write response ...
	2023/12/05 19:37:47 Ready to marshal response ...
	2023/12/05 19:37:47 Ready to write response ...
	2023/12/05 19:37:53 Ready to marshal response ...
	2023/12/05 19:37:53 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:38:10 Ready to marshal response ...
	2023/12/05 19:38:10 Ready to write response ...
	2023/12/05 19:38:13 Ready to marshal response ...
	2023/12/05 19:38:13 Ready to write response ...
	2023/12/05 19:38:26 Ready to marshal response ...
	2023/12/05 19:38:26 Ready to write response ...
	2023/12/05 19:40:15 Ready to marshal response ...
	2023/12/05 19:40:15 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:40:26 up 5 min,  0 users,  load average: 0.67, 1.77, 0.94
	Linux addons-489440 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991] <==
	* I1205 19:37:53.345833       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.145.37"}
	I1205 19:37:56.088994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.124.254"}
	I1205 19:38:00.183938       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1205 19:38:18.574570       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.118:8443->10.244.0.29:36362: read: connection reset by peer
	I1205 19:38:22.251263       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:38:42.744498       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.744692       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.768215       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.768328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.782957       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.783022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.797930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.798963       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.805649       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.806197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.820991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.822371       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.836595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.837079       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.852002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.852059       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:38:43.798856       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:38:43.852323       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:38:43.862777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:40:15.462573       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.76.59"}
	
	* 
	* ==> kube-controller-manager [3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9] <==
	* I1205 19:39:11.613607       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1205 19:39:11.613665       1 shared_informer.go:318] Caches are synced for garbage collector
	W1205 19:39:14.080700       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:14.080854       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:19.171370       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:19.171437       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:25.047938       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:25.048005       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:39:48.037830       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:39:48.038072       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:40:00.137889       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:00.137947       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:40:06.637045       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:40:06.637144       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1205 19:40:15.217789       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1205 19:40:15.260547       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-fp699"
	I1205 19:40:15.275100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.200004ms"
	I1205 19:40:15.286051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.961637ms"
	I1205 19:40:15.286170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.945µs"
	I1205 19:40:15.305061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.363µs"
	I1205 19:40:18.361592       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1205 19:40:18.370837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.19µs"
	I1205 19:40:18.386363       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1205 19:40:18.584239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.043475ms"
	I1205 19:40:18.584336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.868µs"
	
	* 
	* ==> kube-proxy [08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48] <==
	* I1205 19:36:28.976903       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:29.105915       1 node.go:141] Successfully retrieved node IP: 192.168.39.118
	I1205 19:36:29.594233       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 19:36:29.594280       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:36:29.625838       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:29.625973       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:29.626347       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:29.626422       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:29.653843       1 config.go:188] "Starting service config controller"
	I1205 19:36:29.653954       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:29.654066       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:29.654120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:29.675051       1 config.go:315] "Starting node config controller"
	I1205 19:36:29.675214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:29.766915       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:29.774044       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:29.776793       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a] <==
	* E1205 19:35:55.296377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:35:55.296387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:56.102000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:35:56.102104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:35:56.142592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.142831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:56.242471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:35:56.242531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:35:56.290559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.290644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:56.321003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:35:56.321052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:35:56.358373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:35:56.358424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:35:56.368989       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:35:56.369097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:35:56.401269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:56.401402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:56.422335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:35:56.422419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 19:35:56.583641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:35:56.583806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:56.604049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.604108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1205 19:35:58.284637       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 19:35:27 UTC, ends at Tue 2023-12-05 19:40:27 UTC. --
	Dec 05 19:40:17 addons-489440 kubelet[1249]: I1205 19:40:17.545592    1249 scope.go:117] "RemoveContainer" containerID="21859c6a8aa718fb81103259e66b8e6aff83004815a60b09811e871128201c92"
	Dec 05 19:40:18 addons-489440 kubelet[1249]: I1205 19:40:18.860478    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68" path="/var/lib/kubelet/pods/16adaa6e-a0c2-4c5b-82b4-055cfbc9fa68/volumes"
	Dec 05 19:40:18 addons-489440 kubelet[1249]: I1205 19:40:18.861021    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c2fd203-d47b-4fe5-bc28-544f23d55a61" path="/var/lib/kubelet/pods/2c2fd203-d47b-4fe5-bc28-544f23d55a61/volumes"
	Dec 05 19:40:18 addons-489440 kubelet[1249]: I1205 19:40:18.861378    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="701ffb07-2d36-44b5-b6a9-f4939cd77c50" path="/var/lib/kubelet/pods/701ffb07-2d36-44b5-b6a9-f4939cd77c50/volumes"
	Dec 05 19:40:18 addons-489440 kubelet[1249]: I1205 19:40:18.885773    1249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-fp699" podStartSLOduration=2.578250181 podCreationTimestamp="2023-12-05 19:40:15 +0000 UTC" firstStartedPulling="2023-12-05 19:40:16.461856612 +0000 UTC m=+257.755807586" lastFinishedPulling="2023-12-05 19:40:17.769142381 +0000 UTC m=+259.063093353" observedRunningTime="2023-12-05 19:40:18.569256571 +0000 UTC m=+259.863207562" watchObservedRunningTime="2023-12-05 19:40:18.885535948 +0000 UTC m=+260.179486936"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: E1205 19:40:19.210540    1249 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:40:19 addons-489440 kubelet[1249]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:40:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:19 addons-489440 kubelet[1249]:         time="2023-12-05T19:40:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:19 addons-489440 kubelet[1249]:         time="2023-12-05T19:40:19Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:40:19 addons-489440 kubelet[1249]:         time="2023-12-05T19:40:19Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:40:19 addons-489440 kubelet[1249]:  > podSandboxID="63c041f67019db62b09726892e0e2188a8fc341f92effaacb9d276c8e5b04d39"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: E1205 19:40:19.210814    1249 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6rs8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-78klf_gadget(071e9d7c-a5e8-4d75-add6-8f136264b190): CreateContainerError: container create failed: time="2023-12-05T19:40:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: time="2023-12-05T19:40:19Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: time="2023-12-05T19:40:19Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: time="2023-12-05T19:40:19Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:40:19 addons-489440 kubelet[1249]: E1205 19:40:19.210861    1249 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:40:19Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:40:19Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:40:19Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:40:19Z\\\" level=error msg=\\\"runc create failed: unable to start container process: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-78klf" podUID="071e9d7c-a5e8-4d75-add6-8f136264b190"
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.776829    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltj6r\" (UniqueName: \"kubernetes.io/projected/02fee9fd-f45a-4d31-8fb1-0b92c5837678-kube-api-access-ltj6r\") pod \"02fee9fd-f45a-4d31-8fb1-0b92c5837678\" (UID: \"02fee9fd-f45a-4d31-8fb1-0b92c5837678\") "
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.776881    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02fee9fd-f45a-4d31-8fb1-0b92c5837678-webhook-cert\") pod \"02fee9fd-f45a-4d31-8fb1-0b92c5837678\" (UID: \"02fee9fd-f45a-4d31-8fb1-0b92c5837678\") "
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.779654    1249 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fee9fd-f45a-4d31-8fb1-0b92c5837678-kube-api-access-ltj6r" (OuterVolumeSpecName: "kube-api-access-ltj6r") pod "02fee9fd-f45a-4d31-8fb1-0b92c5837678" (UID: "02fee9fd-f45a-4d31-8fb1-0b92c5837678"). InnerVolumeSpecName "kube-api-access-ltj6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.780620    1249 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fee9fd-f45a-4d31-8fb1-0b92c5837678-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "02fee9fd-f45a-4d31-8fb1-0b92c5837678" (UID: "02fee9fd-f45a-4d31-8fb1-0b92c5837678"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.877918    1249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ltj6r\" (UniqueName: \"kubernetes.io/projected/02fee9fd-f45a-4d31-8fb1-0b92c5837678-kube-api-access-ltj6r\") on node \"addons-489440\" DevicePath \"\""
	Dec 05 19:40:21 addons-489440 kubelet[1249]: I1205 19:40:21.877953    1249 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02fee9fd-f45a-4d31-8fb1-0b92c5837678-webhook-cert\") on node \"addons-489440\" DevicePath \"\""
	Dec 05 19:40:22 addons-489440 kubelet[1249]: I1205 19:40:22.576382    1249 scope.go:117] "RemoveContainer" containerID="db7348a6052a5dfd0688628eb1f56cfbf0674012a542454f109b6f5ea800c603"
	Dec 05 19:40:22 addons-489440 kubelet[1249]: E1205 19:40:22.595016    1249 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod02fee9fd-f45a-4d31-8fb1-0b92c5837678/crio-7b2a525cf02cecdb2442c703bb53c5a24f5adb951d6a1bee050277aa218272b4\": RecentStats: unable to find data in memory cache]"
	Dec 05 19:40:22 addons-489440 kubelet[1249]: I1205 19:40:22.861252    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="02fee9fd-f45a-4d31-8fb1-0b92c5837678" path="/var/lib/kubelet/pods/02fee9fd-f45a-4d31-8fb1-0b92c5837678/volumes"
	
	* 
	* ==> storage-provisioner [56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee] <==
	* I1205 19:36:32.926014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:36:33.131438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:36:33.131531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:36:33.222291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28a48f8f-9a40-4c83-9762-47cbb70f03c4", APIVersion:"v1", ResourceVersion:"832", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41 became leader
	I1205 19:36:33.222425       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:36:33.234546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41!
	I1205 19:36:33.437109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41!
	E1205 19:38:35.115815       1 controller.go:1050] claim "a0836ea5-43b5-48bb-8971-f863de02e22c" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489440 -n addons-489440
helpers_test.go:261: (dbg) Run:  kubectl --context addons-489440 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-78klf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-489440 describe pod gadget-78klf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-489440 describe pod gadget-78klf: exit status 1 (69.346412ms)

** stderr ** 
	Error from server (NotFound): pods "gadget-78klf" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-489440 describe pod gadget-78klf: exit status 1
--- FAIL: TestAddons/parallel/Ingress (155.81s)

x
+
TestAddons/parallel/InspektorGadget (482.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-78klf" [071e9d7c-a5e8-4d75-add6-8f136264b190] Pending / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:837: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489440 -n addons-489440
addons_test.go:837: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-12-05 19:45:47.364052567 +0000 UTC m=+660.176512206
addons_test.go:837: (dbg) Run:  kubectl --context addons-489440 describe po gadget-78klf -n gadget
addons_test.go:837: (dbg) kubectl --context addons-489440 describe po gadget-78klf -n gadget:
Name:             gadget-78klf
Namespace:        gadget
Priority:         0
Service Account:  gadget
Node:             addons-489440/192.168.39.118
Start Time:       Tue, 05 Dec 2023 19:36:19 +0000
Labels:           controller-revision-hash=5d55b57d4c
k8s-app=gadget
pod-template-generation=1
Annotations:      container.apparmor.security.beta.kubernetes.io/gadget: unconfined
inspektor-gadget.kinvolk.io/option-hook-mode: auto
Status:           Pending
IP:               192.168.39.118
IPs:
IP:           192.168.39.118
Controlled By:  DaemonSet/gadget
Containers:
gadget:
Container ID:  
Image:         ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
/entrypoint.sh
State:          Waiting
Reason:       CreateContainerError
Ready:          False
Restart Count:  0
Liveness:       exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
Readiness:      exec [/bin/gadgettracermanager -liveness] delay=0s timeout=2s period=5s #success=1 #failure=3
Environment:
NODE_NAME:                                       (v1:spec.nodeName)
GADGET_POD_UID:                                  (v1:metadata.uid)
TRACELOOP_NODE_NAME:                             (v1:spec.nodeName)
TRACELOOP_POD_NAME:                             gadget-78klf (v1:metadata.name)
TRACELOOP_POD_NAMESPACE:                        gadget (v1:metadata.namespace)
GADGET_IMAGE:                                   ghcr.io/inspektor-gadget/inspektor-gadget
INSPEKTOR_GADGET_VERSION:                       v0.16.1
INSPEKTOR_GADGET_OPTION_HOOK_MODE:              auto
INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER:  true
INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH:         /run/containerd/containerd.sock
INSPEKTOR_GADGET_CRIO_SOCKETPATH:               /run/crio/crio.sock
INSPEKTOR_GADGET_DOCKER_SOCKETPATH:             /run/docker.sock
HOST_ROOT:                                      /host
Mounts:
/host from host (rw)
/lib/modules from modules (rw)
/run from run (rw)
/sys/fs/bpf from bpffs (rw)
/sys/fs/cgroup from cgroup (rw)
/sys/kernel/debug from debugfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rs8w (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
host:
Type:          HostPath (bare host directory volume)
Path:          /
HostPathType:  
run:
Type:          HostPath (bare host directory volume)
Path:          /run
HostPathType:  
cgroup:
Type:          HostPath (bare host directory volume)
Path:          /sys/fs/cgroup
HostPathType:  
modules:
Type:          HostPath (bare host directory volume)
Path:          /lib/modules
HostPathType:  
bpffs:
Type:          HostPath (bare host directory volume)
Path:          /sys/fs/bpf
HostPathType:  
debugfs:
Type:          HostPath (bare host directory volume)
Path:          /sys/kernel/debug
HostPathType:  
kube-api-access-6rs8w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 :NoSchedule op=Exists
:NoExecute op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type     Reason     Age    From               Message
----     ------     ----   ----               -------
Normal   Scheduled  9m28s  default-scheduler  Successfully assigned gadget/gadget-78klf to addons-489440
Normal   Pulled     9m8s   kubelet            Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 7.913s (8.722s including waiting)
Warning  Failed     9m8s   kubelet            Error: container create failed: time="2023-12-05T19:36:39Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:39Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:36:39Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:36:39Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  8m31s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 205ms (36.708s including waiting)
Warning  Failed  8m30s  kubelet  Error: container create failed: time="2023-12-05T19:37:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:17Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:17Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:17Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Warning  Failed  8m21s  kubelet  Error: container create failed: time="2023-12-05T19:37:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:26Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:26Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  8m21s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 177ms (9.203s including waiting)
Normal   Pulled  8m7s   kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 185ms (185ms including waiting)
Warning  Failed  8m7s   kubelet  Error: container create failed: time="2023-12-05T19:37:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:40Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:40Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m50s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 261ms (2.986s including waiting)
Warning  Failed  7m49s  kubelet  Error: container create failed: time="2023-12-05T19:37:57Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:57Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:37:57Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:37:57Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m31s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 193ms (4.582s including waiting)
Warning  Failed  7m31s  kubelet  Error: container create failed: time="2023-12-05T19:38:16Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:16Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:16Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:16Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m17s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 181ms (181ms including waiting)
Warning  Failed  7m17s  kubelet  Error: container create failed: time="2023-12-05T19:38:30Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:30Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:30Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:30Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal   Pulled  7m1s  kubelet  Successfully pulled image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931" in 237ms (237ms including waiting)
Warning  Failed  7m1s  kubelet  Error: container create failed: time="2023-12-05T19:38:46Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:46Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
time="2023-12-05T19:38:46Z" level=warning msg="lstat : no such file or directory"
time="2023-12-05T19:38:46Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
Normal  Pulling  4m10s (x21 over 9m16s)  kubelet  Pulling image "ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931"
addons_test.go:837: (dbg) Run:  kubectl --context addons-489440 logs gadget-78klf -n gadget
addons_test.go:837: (dbg) Non-zero exit: kubectl --context addons-489440 logs gadget-78klf -n gadget: exit status 1 (84.836213ms)

** stderr ** 
	Error from server (BadRequest): container "gadget" in pod "gadget-78klf" is waiting to start: CreateContainerError

** /stderr **
addons_test.go:837: kubectl --context addons-489440 logs gadget-78klf -n gadget: exit status 1
addons_test.go:838: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-489440 -n addons-489440
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-489440 logs -n 25: (1.424520137s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |                     |
	|         | -p download-only-103789                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | -p download-only-103789                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-103789                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| delete  | -p download-only-103789                                                                     | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-109311 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | binary-mirror-109311                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35295                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-109311                                                                     | binary-mirror-109311 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:35 UTC |
	| addons  | enable dashboard -p                                                                         | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |                     |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-489440 --wait=true                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC | 05 Dec 23 19:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489440 ssh cat                                                                       | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | /opt/local-path-provisioner/pvc-fb2b2dea-9f18-4d7a-86cd-fd40e7f776f4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-489440                                                                            |                      |         |         |                     |                     |
	| ip      | addons-489440 ip                                                                            | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | addons-489440                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:37 UTC | 05 Dec 23 19:37 UTC |
	|         | -p addons-489440                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-489440 ssh curl -s                                                                   | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-489440 addons                                                                        | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:38 UTC | 05 Dec 23 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-489440 ip                                                                            | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-489440 addons disable                                                                | addons-489440        | jenkins | v1.32.0 | 05 Dec 23 19:40 UTC | 05 Dec 23 19:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:14.159744   13818 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:14.159863   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:14.159869   13818 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:14.159876   13818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:14.160054   13818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 19:35:14.160709   13818 out.go:303] Setting JSON to false
	I1205 19:35:14.161493   13818 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1067,"bootTime":1701803847,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:14.161550   13818 start.go:138] virtualization: kvm guest
	I1205 19:35:14.163788   13818 out.go:177] * [addons-489440] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:14.166078   13818 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:35:14.166040   13818 notify.go:220] Checking for updates...
	I1205 19:35:14.167528   13818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:14.169231   13818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:35:14.170870   13818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.172298   13818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:35:14.173697   13818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:35:14.175234   13818 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:35:14.206373   13818 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:35:14.207768   13818 start.go:298] selected driver: kvm2
	I1205 19:35:14.207784   13818 start.go:902] validating driver "kvm2" against <nil>
	I1205 19:35:14.207795   13818 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:35:14.208816   13818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:14.208905   13818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:35:14.223163   13818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:35:14.223258   13818 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:35:14.223480   13818 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:35:14.223529   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:14.223537   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:14.223547   13818 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:35:14.223554   13818 start_flags.go:323] config:
	{Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:14.223678   13818 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:14.225732   13818 out.go:177] * Starting control plane node addons-489440 in cluster addons-489440
	I1205 19:35:14.227282   13818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:14.227325   13818 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:14.227332   13818 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:14.227425   13818 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 19:35:14.227439   13818 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 19:35:14.227738   13818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json ...
	I1205 19:35:14.227762   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json: {Name:mka12c39246080142bf01600aa551525066e8634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:14.227917   13818 start.go:365] acquiring machines lock for addons-489440: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:35:14.227986   13818 start.go:369] acquired machines lock for "addons-489440" in 51.373µs
	I1205 19:35:14.228016   13818 start.go:93] Provisioning new machine with config: &{Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:35:14.228071   13818 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:35:14.229883   13818 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 19:35:14.230032   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:35:14.230075   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:35:14.244307   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I1205 19:35:14.244802   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:35:14.245379   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:35:14.245404   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:35:14.245735   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:35:14.245901   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:14.246015   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:14.246148   13818 start.go:159] libmachine.API.Create for "addons-489440" (driver="kvm2")
	I1205 19:35:14.246178   13818 client.go:168] LocalClient.Create starting
	I1205 19:35:14.246225   13818 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 19:35:14.338616   13818 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 19:35:14.514178   13818 main.go:141] libmachine: Running pre-create checks...
	I1205 19:35:14.514205   13818 main.go:141] libmachine: (addons-489440) Calling .PreCreateCheck
	I1205 19:35:14.514731   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:14.515125   13818 main.go:141] libmachine: Creating machine...
	I1205 19:35:14.515140   13818 main.go:141] libmachine: (addons-489440) Calling .Create
	I1205 19:35:14.515309   13818 main.go:141] libmachine: (addons-489440) Creating KVM machine...
	I1205 19:35:14.516461   13818 main.go:141] libmachine: (addons-489440) DBG | found existing default KVM network
	I1205 19:35:14.517360   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.517194   13840 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I1205 19:35:14.523271   13818 main.go:141] libmachine: (addons-489440) DBG | trying to create private KVM network mk-addons-489440 192.168.39.0/24...
	I1205 19:35:14.590428   13818 main.go:141] libmachine: (addons-489440) DBG | private KVM network mk-addons-489440 192.168.39.0/24 created
	I1205 19:35:14.590454   13818 main.go:141] libmachine: (addons-489440) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 ...
	I1205 19:35:14.590467   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.590404   13840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.590491   13818 main.go:141] libmachine: (addons-489440) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 19:35:14.590582   13818 main.go:141] libmachine: (addons-489440) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 19:35:14.810034   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.809902   13840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa...
	I1205 19:35:14.920613   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.920447   13840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/addons-489440.rawdisk...
	I1205 19:35:14.920663   13818 main.go:141] libmachine: (addons-489440) DBG | Writing magic tar header
	I1205 19:35:14.920680   13818 main.go:141] libmachine: (addons-489440) DBG | Writing SSH key tar header
	I1205 19:35:14.920694   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:14.920582   13840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 ...
	I1205 19:35:14.920711   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440
	I1205 19:35:14.920727   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 19:35:14.920747   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440 (perms=drwx------)
	I1205 19:35:14.920769   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:14.920780   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 19:35:14.920791   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:35:14.920808   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:35:14.920828   13818 main.go:141] libmachine: (addons-489440) DBG | Checking permissions on dir: /home
	I1205 19:35:14.920841   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:35:14.920850   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 19:35:14.920858   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 19:35:14.920868   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:35:14.920875   13818 main.go:141] libmachine: (addons-489440) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:35:14.920883   13818 main.go:141] libmachine: (addons-489440) Creating domain...
	I1205 19:35:14.920890   13818 main.go:141] libmachine: (addons-489440) DBG | Skipping /home - not owner
	I1205 19:35:14.921921   13818 main.go:141] libmachine: (addons-489440) define libvirt domain using xml: 
	I1205 19:35:14.921943   13818 main.go:141] libmachine: (addons-489440) <domain type='kvm'>
	I1205 19:35:14.921954   13818 main.go:141] libmachine: (addons-489440)   <name>addons-489440</name>
	I1205 19:35:14.921964   13818 main.go:141] libmachine: (addons-489440)   <memory unit='MiB'>4000</memory>
	I1205 19:35:14.921973   13818 main.go:141] libmachine: (addons-489440)   <vcpu>2</vcpu>
	I1205 19:35:14.921982   13818 main.go:141] libmachine: (addons-489440)   <features>
	I1205 19:35:14.921988   13818 main.go:141] libmachine: (addons-489440)     <acpi/>
	I1205 19:35:14.922002   13818 main.go:141] libmachine: (addons-489440)     <apic/>
	I1205 19:35:14.922008   13818 main.go:141] libmachine: (addons-489440)     <pae/>
	I1205 19:35:14.922016   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922024   13818 main.go:141] libmachine: (addons-489440)   </features>
	I1205 19:35:14.922034   13818 main.go:141] libmachine: (addons-489440)   <cpu mode='host-passthrough'>
	I1205 19:35:14.922042   13818 main.go:141] libmachine: (addons-489440)   
	I1205 19:35:14.922057   13818 main.go:141] libmachine: (addons-489440)   </cpu>
	I1205 19:35:14.922070   13818 main.go:141] libmachine: (addons-489440)   <os>
	I1205 19:35:14.922084   13818 main.go:141] libmachine: (addons-489440)     <type>hvm</type>
	I1205 19:35:14.922108   13818 main.go:141] libmachine: (addons-489440)     <boot dev='cdrom'/>
	I1205 19:35:14.922119   13818 main.go:141] libmachine: (addons-489440)     <boot dev='hd'/>
	I1205 19:35:14.922125   13818 main.go:141] libmachine: (addons-489440)     <bootmenu enable='no'/>
	I1205 19:35:14.922149   13818 main.go:141] libmachine: (addons-489440)   </os>
	I1205 19:35:14.922177   13818 main.go:141] libmachine: (addons-489440)   <devices>
	I1205 19:35:14.922188   13818 main.go:141] libmachine: (addons-489440)     <disk type='file' device='cdrom'>
	I1205 19:35:14.922200   13818 main.go:141] libmachine: (addons-489440)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/boot2docker.iso'/>
	I1205 19:35:14.922211   13818 main.go:141] libmachine: (addons-489440)       <target dev='hdc' bus='scsi'/>
	I1205 19:35:14.922219   13818 main.go:141] libmachine: (addons-489440)       <readonly/>
	I1205 19:35:14.922226   13818 main.go:141] libmachine: (addons-489440)     </disk>
	I1205 19:35:14.922236   13818 main.go:141] libmachine: (addons-489440)     <disk type='file' device='disk'>
	I1205 19:35:14.922245   13818 main.go:141] libmachine: (addons-489440)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:35:14.922256   13818 main.go:141] libmachine: (addons-489440)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/addons-489440.rawdisk'/>
	I1205 19:35:14.922262   13818 main.go:141] libmachine: (addons-489440)       <target dev='hda' bus='virtio'/>
	I1205 19:35:14.922293   13818 main.go:141] libmachine: (addons-489440)     </disk>
	I1205 19:35:14.922306   13818 main.go:141] libmachine: (addons-489440)     <interface type='network'>
	I1205 19:35:14.922318   13818 main.go:141] libmachine: (addons-489440)       <source network='mk-addons-489440'/>
	I1205 19:35:14.922329   13818 main.go:141] libmachine: (addons-489440)       <model type='virtio'/>
	I1205 19:35:14.922342   13818 main.go:141] libmachine: (addons-489440)     </interface>
	I1205 19:35:14.922354   13818 main.go:141] libmachine: (addons-489440)     <interface type='network'>
	I1205 19:35:14.922368   13818 main.go:141] libmachine: (addons-489440)       <source network='default'/>
	I1205 19:35:14.922379   13818 main.go:141] libmachine: (addons-489440)       <model type='virtio'/>
	I1205 19:35:14.922391   13818 main.go:141] libmachine: (addons-489440)     </interface>
	I1205 19:35:14.922407   13818 main.go:141] libmachine: (addons-489440)     <serial type='pty'>
	I1205 19:35:14.922418   13818 main.go:141] libmachine: (addons-489440)       <target port='0'/>
	I1205 19:35:14.922426   13818 main.go:141] libmachine: (addons-489440)     </serial>
	I1205 19:35:14.922434   13818 main.go:141] libmachine: (addons-489440)     <console type='pty'>
	I1205 19:35:14.922440   13818 main.go:141] libmachine: (addons-489440)       <target type='serial' port='0'/>
	I1205 19:35:14.922448   13818 main.go:141] libmachine: (addons-489440)     </console>
	I1205 19:35:14.922455   13818 main.go:141] libmachine: (addons-489440)     <rng model='virtio'>
	I1205 19:35:14.922464   13818 main.go:141] libmachine: (addons-489440)       <backend model='random'>/dev/random</backend>
	I1205 19:35:14.922471   13818 main.go:141] libmachine: (addons-489440)     </rng>
	I1205 19:35:14.922512   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922619   13818 main.go:141] libmachine: (addons-489440)     
	I1205 19:35:14.922645   13818 main.go:141] libmachine: (addons-489440)   </devices>
	I1205 19:35:14.922660   13818 main.go:141] libmachine: (addons-489440) </domain>
	I1205 19:35:14.922677   13818 main.go:141] libmachine: (addons-489440) 
	I1205 19:35:14.928410   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:97:db:62 in network default
	I1205 19:35:14.928993   13818 main.go:141] libmachine: (addons-489440) Ensuring networks are active...
	I1205 19:35:14.929024   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:14.929743   13818 main.go:141] libmachine: (addons-489440) Ensuring network default is active
	I1205 19:35:14.930115   13818 main.go:141] libmachine: (addons-489440) Ensuring network mk-addons-489440 is active
	I1205 19:35:14.930659   13818 main.go:141] libmachine: (addons-489440) Getting domain xml...
	I1205 19:35:14.931474   13818 main.go:141] libmachine: (addons-489440) Creating domain...
	I1205 19:35:16.395798   13818 main.go:141] libmachine: (addons-489440) Waiting to get IP...
	I1205 19:35:16.396657   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.397119   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.397141   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.397093   13840 retry.go:31] will retry after 264.881612ms: waiting for machine to come up
	I1205 19:35:16.663810   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.664267   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.664290   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.664224   13840 retry.go:31] will retry after 237.966873ms: waiting for machine to come up
	I1205 19:35:16.903971   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:16.904567   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:16.904600   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:16.904505   13840 retry.go:31] will retry after 365.814567ms: waiting for machine to come up
	I1205 19:35:17.272180   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:17.272685   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:17.272714   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:17.272642   13840 retry.go:31] will retry after 609.794264ms: waiting for machine to come up
	I1205 19:35:17.884599   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:17.885068   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:17.885091   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:17.885032   13840 retry.go:31] will retry after 503.152832ms: waiting for machine to come up
	I1205 19:35:18.389634   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:18.390035   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:18.390058   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:18.389986   13840 retry.go:31] will retry after 692.863454ms: waiting for machine to come up
	I1205 19:35:19.085146   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:19.085648   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:19.085669   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:19.085597   13840 retry.go:31] will retry after 833.550331ms: waiting for machine to come up
	I1205 19:35:19.920316   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:19.920845   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:19.920875   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:19.920778   13840 retry.go:31] will retry after 1.156757357s: waiting for machine to come up
	I1205 19:35:21.079096   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:21.079560   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:21.079598   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:21.079519   13840 retry.go:31] will retry after 1.491242494s: waiting for machine to come up
	I1205 19:35:22.573348   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:22.573837   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:22.573910   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:22.573826   13840 retry.go:31] will retry after 1.895533579s: waiting for machine to come up
	I1205 19:35:24.470986   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:24.471498   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:24.471533   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:24.471442   13840 retry.go:31] will retry after 2.736768173s: waiting for machine to come up
	I1205 19:35:27.209396   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:27.209937   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:27.209962   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:27.209872   13840 retry.go:31] will retry after 3.057692651s: waiting for machine to come up
	I1205 19:35:30.269596   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:30.270124   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:30.270159   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:30.270101   13840 retry.go:31] will retry after 4.032017669s: waiting for machine to come up
	I1205 19:35:34.305239   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:34.305672   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find current IP address of domain addons-489440 in network mk-addons-489440
	I1205 19:35:34.305696   13818 main.go:141] libmachine: (addons-489440) DBG | I1205 19:35:34.305634   13840 retry.go:31] will retry after 3.851038931s: waiting for machine to come up
	I1205 19:35:38.161676   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.162028   13818 main.go:141] libmachine: (addons-489440) Found IP for machine: 192.168.39.118
	I1205 19:35:38.162055   13818 main.go:141] libmachine: (addons-489440) Reserving static IP address...
	I1205 19:35:38.162066   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has current primary IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.162435   13818 main.go:141] libmachine: (addons-489440) DBG | unable to find host DHCP lease matching {name: "addons-489440", mac: "52:54:00:e7:05:ac", ip: "192.168.39.118"} in network mk-addons-489440
	I1205 19:35:38.234192   13818 main.go:141] libmachine: (addons-489440) DBG | Getting to WaitForSSH function...
	I1205 19:35:38.234222   13818 main.go:141] libmachine: (addons-489440) Reserved static IP address: 192.168.39.118
	I1205 19:35:38.234235   13818 main.go:141] libmachine: (addons-489440) Waiting for SSH to be available...
	I1205 19:35:38.236866   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.237320   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.237351   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.237547   13818 main.go:141] libmachine: (addons-489440) DBG | Using SSH client type: external
	I1205 19:35:38.237586   13818 main.go:141] libmachine: (addons-489440) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa (-rw-------)
	I1205 19:35:38.237628   13818 main.go:141] libmachine: (addons-489440) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:35:38.237642   13818 main.go:141] libmachine: (addons-489440) DBG | About to run SSH command:
	I1205 19:35:38.237656   13818 main.go:141] libmachine: (addons-489440) DBG | exit 0
	I1205 19:35:38.342519   13818 main.go:141] libmachine: (addons-489440) DBG | SSH cmd err, output: <nil>: 
	I1205 19:35:38.342749   13818 main.go:141] libmachine: (addons-489440) KVM machine creation complete!
	I1205 19:35:38.343095   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:38.343738   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:38.343947   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:38.344093   13818 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:35:38.344109   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:35:38.345245   13818 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:35:38.345259   13818 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:35:38.345266   13818 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:35:38.345274   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.347322   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.347645   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.347675   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.347791   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.347948   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.348072   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.348220   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.348373   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.348693   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.348705   13818 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:35:38.477448   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:38.477471   13818 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:35:38.477478   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.480150   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.480491   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.480517   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.480693   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.480896   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.481059   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.481231   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.481395   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.481716   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.481728   13818 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:35:38.611268   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 19:35:38.611340   13818 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:35:38.611347   13818 main.go:141] libmachine: Provisioning with buildroot...
	I1205 19:35:38.611355   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.611582   13818 buildroot.go:166] provisioning hostname "addons-489440"
	I1205 19:35:38.611608   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.611755   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.614217   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.614556   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.614578   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.614744   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.614917   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.615057   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.615191   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.615360   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.615658   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.615671   13818 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-489440 && echo "addons-489440" | sudo tee /etc/hostname
	I1205 19:35:38.760205   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-489440
	
	I1205 19:35:38.760229   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.762695   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.763022   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.763053   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.763188   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:38.763386   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.763584   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:38.763724   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:38.763909   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:38.764257   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:38.764274   13818 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-489440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-489440/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-489440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:35:38.902054   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:35:38.902082   13818 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 19:35:38.902097   13818 buildroot.go:174] setting up certificates
	I1205 19:35:38.902107   13818 provision.go:83] configureAuth start
	I1205 19:35:38.902115   13818 main.go:141] libmachine: (addons-489440) Calling .GetMachineName
	I1205 19:35:38.902409   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:38.904824   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.905112   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.905149   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.905327   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:38.907463   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.907774   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:38.907798   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:38.907957   13818 provision.go:138] copyHostCerts
	I1205 19:35:38.908027   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 19:35:38.908197   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 19:35:38.908310   13818 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 19:35:38.908427   13818 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.addons-489440 san=[192.168.39.118 192.168.39.118 localhost 127.0.0.1 minikube addons-489440]
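The server cert generated above is signed by the local minikube CA with the SANs listed in that log line. A rough sketch of the same step using Go's crypto/x509, assuming an RSA (PKCS#1) CA key like the ca-key.pem kept under .minikube/certs, follows; file names and the validity period are placeholders.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// must keeps the sketch short; panic on any error.
	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Placeholder file names for the existing CA material.
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 CA key

		serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-489440"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // placeholder validity
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Same kind of SANs as the log line above.
			DNSNames:    []string{"localhost", "minikube", "addons-489440"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.118"), net.ParseIP("127.0.0.1")},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
			panic(err)
		}
	}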
	I1205 19:35:39.128128   13818 provision.go:172] copyRemoteCerts
	I1205 19:35:39.128212   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:35:39.128237   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.130499   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.130773   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.130802   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.130906   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.131078   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.131244   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.131384   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.228107   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:35:39.250519   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 19:35:39.272663   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 19:35:39.294462   13818 provision.go:86] duration metric: configureAuth took 392.344209ms
	I1205 19:35:39.294487   13818 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:35:39.294665   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:35:39.294751   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.297323   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.297632   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.297659   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.297889   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.298086   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.298247   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.298408   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.298606   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.298961   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:39.298977   13818 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:35:39.633812   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:35:39.633841   13818 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:35:39.633877   13818 main.go:141] libmachine: (addons-489440) Calling .GetURL
	I1205 19:35:39.635073   13818 main.go:141] libmachine: (addons-489440) DBG | Using libvirt version 6000000
	I1205 19:35:39.637564   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.637930   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.637968   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.638095   13818 main.go:141] libmachine: Docker is up and running!
	I1205 19:35:39.638112   13818 main.go:141] libmachine: Reticulating splines...
	I1205 19:35:39.638120   13818 client.go:171] LocalClient.Create took 25.39193102s
	I1205 19:35:39.638141   13818 start.go:167] duration metric: libmachine.API.Create for "addons-489440" took 25.391993142s
	I1205 19:35:39.638154   13818 start.go:300] post-start starting for "addons-489440" (driver="kvm2")
	I1205 19:35:39.638166   13818 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:35:39.638188   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.638458   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:35:39.638490   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.640792   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.641135   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.641166   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.641310   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.641491   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.641642   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.641767   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.735852   13818 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:35:39.740122   13818 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 19:35:39.740142   13818 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 19:35:39.740197   13818 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 19:35:39.740217   13818 start.go:303] post-start completed in 102.057615ms
	I1205 19:35:39.740246   13818 main.go:141] libmachine: (addons-489440) Calling .GetConfigRaw
	I1205 19:35:39.740860   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:39.743930   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.744237   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.744267   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.744435   13818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/config.json ...
	I1205 19:35:39.744590   13818 start.go:128] duration metric: createHost completed in 25.516504912s
	I1205 19:35:39.744608   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.746401   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.746791   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.746818   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.746921   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.747087   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.747249   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.747403   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.747545   13818 main.go:141] libmachine: Using SSH client type: native
	I1205 19:35:39.747904   13818 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I1205 19:35:39.747917   13818 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 19:35:39.879192   13818 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701804939.862103465
	
	I1205 19:35:39.879218   13818 fix.go:206] guest clock: 1701804939.862103465
	I1205 19:35:39.879228   13818 fix.go:219] Guest: 2023-12-05 19:35:39.862103465 +0000 UTC Remote: 2023-12-05 19:35:39.744599227 +0000 UTC m=+25.630995544 (delta=117.504238ms)
	I1205 19:35:39.879280   13818 fix.go:190] guest clock delta is within tolerance: 117.504238ms
	I1205 19:35:39.879287   13818 start.go:83] releasing machines lock for "addons-489440", held for 25.651287508s
	I1205 19:35:39.879321   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.879550   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:39.881938   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.882230   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.882258   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.882392   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.882915   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.883078   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:35:39.883163   13818 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:35:39.883212   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.883334   13818 ssh_runner.go:195] Run: cat /version.json
	I1205 19:35:39.883361   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:35:39.885833   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886119   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.886150   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886205   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886363   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.886532   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.886564   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:39.886587   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:39.886756   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.886857   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:35:39.886930   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.887002   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:35:39.887192   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:35:39.887352   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:35:39.975566   13818 ssh_runner.go:195] Run: systemctl --version
	I1205 19:35:40.034134   13818 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:35:40.194302   13818 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:35:40.200537   13818 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:35:40.200624   13818 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:35:40.214601   13818 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:35:40.214621   13818 start.go:475] detecting cgroup driver to use...
	I1205 19:35:40.214682   13818 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:35:40.227363   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:35:40.239156   13818 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:35:40.239208   13818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:35:40.251130   13818 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:35:40.263074   13818 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:35:40.364842   13818 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:35:40.491312   13818 docker.go:219] disabling docker service ...
	I1205 19:35:40.491378   13818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:35:40.503958   13818 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:35:40.515603   13818 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:35:40.614310   13818 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:35:40.722603   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:35:40.735155   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:35:40.752438   13818 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 19:35:40.752502   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.762547   13818 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:35:40.762641   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.772533   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:35:40.782168   13818 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
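After the three sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (an illustrative fragment, not a dump from this run):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"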
	I1205 19:35:40.791508   13818 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:35:40.801427   13818 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:35:40.810095   13818 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:35:40.810151   13818 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:35:40.823249   13818 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
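Because the bridge netfilter sysctl does not exist until br_netfilter is loaded, minikube falls back to modprobe and a direct write to ip_forward, as seen above. The persistent equivalent, assuming a standard modules-load.d/sysctl.d layout (not files this run actually writes), would be:

	# /etc/modules-load.d/k8s.conf
	br_netfilter

	# /etc/sysctl.d/k8s.conf
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1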
	I1205 19:35:40.832060   13818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:35:40.954077   13818 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:35:41.122395   13818 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:35:41.122484   13818 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:35:41.131169   13818 start.go:543] Will wait 60s for crictl version
	I1205 19:35:41.131265   13818 ssh_runner.go:195] Run: which crictl
	I1205 19:35:41.135480   13818 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:35:41.173240   13818 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 19:35:41.173342   13818 ssh_runner.go:195] Run: crio --version
	I1205 19:35:41.221589   13818 ssh_runner.go:195] Run: crio --version
	I1205 19:35:41.268383   13818 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 19:35:41.269793   13818 main.go:141] libmachine: (addons-489440) Calling .GetIP
	I1205 19:35:41.272292   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:41.272659   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:35:41.272690   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:35:41.272901   13818 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:35:41.277169   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:41.291400   13818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:35:41.291447   13818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:41.334338   13818 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 19:35:41.334415   13818 ssh_runner.go:195] Run: which lz4
	I1205 19:35:41.338889   13818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 19:35:41.342953   13818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:35:41.342980   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 19:35:42.967336   13818 crio.go:444] Took 1.628478 seconds to copy over tarball
	I1205 19:35:42.967414   13818 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:35:46.370512   13818 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.403064024s)
	I1205 19:35:46.370535   13818 crio.go:451] Took 3.403171 seconds to extract the tarball
	I1205 19:35:46.370544   13818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:35:46.412235   13818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:35:46.483393   13818 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 19:35:46.483418   13818 cache_images.go:84] Images are preloaded, skipping loading
	I1205 19:35:46.483483   13818 ssh_runner.go:195] Run: crio config
	I1205 19:35:46.553207   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:46.553229   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:46.553248   13818 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:35:46.553274   13818 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-489440 NodeName:addons-489440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 19:35:46.553424   13818 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-489440"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
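The generated config above is four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Below is a minimal sketch for sanity-checking such a multi-document file with gopkg.in/yaml.v3; the local file name is a placeholder (on the VM it is staged as /var/tmp/minikube/kubeadm.yaml.new before being copied into place).

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // placeholder path
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF after the last document
			}
			kind, _ := doc["kind"].(string)
			switch kind {
			case "KubeletConfiguration":
				// Must agree with the cgroup_manager configured for cri-o earlier.
				fmt.Println(kind, "cgroupDriver =", doc["cgroupDriver"])
			case "KubeProxyConfiguration":
				fmt.Println(kind, "clusterCIDR =", doc["clusterCIDR"])
			default:
				fmt.Println(kind, "apiVersion =", doc["apiVersion"])
			}
		}
	}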
	
	I1205 19:35:46.553501   13818 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-489440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:35:46.553557   13818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 19:35:46.562980   13818 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:35:46.563047   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:35:46.571647   13818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1205 19:35:46.588177   13818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 19:35:46.604192   13818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1205 19:35:46.620388   13818 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I1205 19:35:46.624349   13818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:35:46.636795   13818 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440 for IP: 192.168.39.118
	I1205 19:35:46.636838   13818 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.636992   13818 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 19:35:46.709565   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt ...
	I1205 19:35:46.709597   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt: {Name:mkd92853ad4ee64ebff4e435b2cc586d9215b621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.709761   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key ...
	I1205 19:35:46.709773   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key: {Name:mk2fb5b04c6af103934aa88af1b87a7b3539dcb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.709840   13818 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 19:35:46.809161   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt ...
	I1205 19:35:46.809190   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt: {Name:mk36bcbd2bcb143bdd57f2b15aecacacbfec2fd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.809361   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key ...
	I1205 19:35:46.809375   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key: {Name:mk5598c61dc87d140bae66e4b9645218cf3cf0b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.809494   13818 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key
	I1205 19:35:46.809509   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt with IP's: []
	I1205 19:35:46.907040   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt ...
	I1205 19:35:46.907070   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: {Name:mk00e3f7f7afbf785ec9d44dafa974020feeae6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.907258   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key ...
	I1205 19:35:46.907278   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.key: {Name:mka20324584ee4250f8c8033ad479bb3a69812f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:46.907378   13818 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9
	I1205 19:35:46.907397   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 with IP's: [192.168.39.118 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:35:47.111932   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 ...
	I1205 19:35:47.111965   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9: {Name:mk0dd9b4da1bab9c2a80e4dbfd9329f14ba21be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.112141   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9 ...
	I1205 19:35:47.112159   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9: {Name:mk4f94cb525e3c041d4cb708248e6b593206a768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.112249   13818 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt.ee260ba9 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt
	I1205 19:35:47.112342   13818 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key.ee260ba9 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key
	I1205 19:35:47.112393   13818 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key
	I1205 19:35:47.112406   13818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt with IP's: []
	I1205 19:35:47.260538   13818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt ...
	I1205 19:35:47.260573   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt: {Name:mk1b301692f07606254d56653011c58f802595fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.260757   13818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key ...
	I1205 19:35:47.260773   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key: {Name:mk5ca9b7e008662710cedf6525e99b1f35be4b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:35:47.260982   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:35:47.261031   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:35:47.261066   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:35:47.261112   13818 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 19:35:47.261704   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:35:47.287542   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:35:47.316630   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:35:47.341984   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 19:35:47.366320   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:35:47.390383   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:35:47.413463   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:35:47.436130   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:35:47.459541   13818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:35:47.482746   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:35:47.499282   13818 ssh_runner.go:195] Run: openssl version
	I1205 19:35:47.504989   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:35:47.514669   13818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.519297   13818 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.519357   13818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:35:47.524788   13818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:35:47.534792   13818 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:35:47.539177   13818 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:35:47.539236   13818 kubeadm.go:404] StartCluster: {Name:addons-489440 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-489440 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:47.539321   13818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:35:47.539403   13818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:35:47.576741   13818 cri.go:89] found id: ""
	I1205 19:35:47.576827   13818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:35:47.585956   13818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:35:47.594785   13818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:35:47.604006   13818 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:35:47.604051   13818 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 19:35:47.656547   13818 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 19:35:47.656679   13818 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:35:47.799861   13818 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:35:47.800013   13818 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:35:47.800120   13818 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:35:48.039101   13818 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:35:48.276732   13818 out.go:204]   - Generating certificates and keys ...
	I1205 19:35:48.276834   13818 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:35:48.276919   13818 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:35:48.330217   13818 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:35:48.397989   13818 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:35:48.535133   13818 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:35:48.607491   13818 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:35:48.731145   13818 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:35:48.731323   13818 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-489440 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1205 19:35:48.892685   13818 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:35:48.892894   13818 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-489440 localhost] and IPs [192.168.39.118 127.0.0.1 ::1]
	I1205 19:35:48.960467   13818 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:35:49.009016   13818 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:35:49.166596   13818 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:35:49.166711   13818 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:35:49.222650   13818 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:35:49.446690   13818 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:35:49.653181   13818 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:35:49.712751   13818 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:35:49.713383   13818 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:35:49.715670   13818 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:35:49.717849   13818 out.go:204]   - Booting up control plane ...
	I1205 19:35:49.718008   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:35:49.718134   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:35:49.718246   13818 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:35:49.735826   13818 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:35:49.736644   13818 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:35:49.736777   13818 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:35:49.866371   13818 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:35:57.369286   13818 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504851 seconds
	I1205 19:35:57.369440   13818 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:35:57.396596   13818 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:35:57.931766   13818 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:35:57.932038   13818 kubeadm.go:322] [mark-control-plane] Marking the node addons-489440 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 19:35:58.445481   13818 kubeadm.go:322] [bootstrap-token] Using token: zjs04c.xzfumu8bjkpzqcv2
	I1205 19:35:58.447342   13818 out.go:204]   - Configuring RBAC rules ...
	I1205 19:35:58.447510   13818 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:35:58.452534   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:35:58.460167   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:35:58.470254   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:35:58.474314   13818 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:35:58.478698   13818 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:35:58.500772   13818 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:35:58.781320   13818 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:35:58.902070   13818 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:35:58.902116   13818 kubeadm.go:322] 
	I1205 19:35:58.902202   13818 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:35:58.902216   13818 kubeadm.go:322] 
	I1205 19:35:58.902351   13818 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:35:58.902363   13818 kubeadm.go:322] 
	I1205 19:35:58.902396   13818 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:35:58.902484   13818 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:35:58.902559   13818 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:35:58.902579   13818 kubeadm.go:322] 
	I1205 19:35:58.902666   13818 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 19:35:58.902677   13818 kubeadm.go:322] 
	I1205 19:35:58.902745   13818 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 19:35:58.902755   13818 kubeadm.go:322] 
	I1205 19:35:58.902828   13818 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:35:58.902908   13818 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:35:58.902984   13818 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:35:58.902999   13818 kubeadm.go:322] 
	I1205 19:35:58.903102   13818 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:35:58.903196   13818 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:35:58.903209   13818 kubeadm.go:322] 
	I1205 19:35:58.903318   13818 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zjs04c.xzfumu8bjkpzqcv2 \
	I1205 19:35:58.903468   13818 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 19:35:58.903537   13818 kubeadm.go:322] 	--control-plane 
	I1205 19:35:58.903554   13818 kubeadm.go:322] 
	I1205 19:35:58.903653   13818 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:35:58.903662   13818 kubeadm.go:322] 
	I1205 19:35:58.903760   13818 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zjs04c.xzfumu8bjkpzqcv2 \
	I1205 19:35:58.903885   13818 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 19:35:58.904648   13818 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:35:58.904667   13818 cni.go:84] Creating CNI manager for ""
	I1205 19:35:58.904674   13818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:58.906642   13818 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 19:35:58.908294   13818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 19:35:58.938165   13818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
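The 457-byte file copied above is the default bridge CNI config for the 10.244.0.0/16 pod CIDR. A representative /etc/cni/net.d/1-k8s.conflist looks roughly like the following; apart from the subnet, the field values are assumptions rather than the exact contents written here:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}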
	I1205 19:35:59.006939   13818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:35:59.007005   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.007005   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=addons-489440 minikube.k8s.io/updated_at=2023_12_05T19_35_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.276052   13818 ops.go:34] apiserver oom_adj: -16
	I1205 19:35:59.276224   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.372513   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:35:59.963459   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.463150   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:00.963571   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.463883   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:01.962859   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.463491   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:02.963499   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.462934   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:03.962885   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.463799   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:04.963425   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.462931   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:05.963219   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.463441   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:06.963297   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.463098   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:07.963626   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:08.463632   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:08.963922   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:09.463162   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:09.963624   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:10.463115   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:10.963505   13818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:36:11.113708   13818 kubeadm.go:1088] duration metric: took 12.106755631s to wait for elevateKubeSystemPrivileges.
	I1205 19:36:11.113742   13818 kubeadm.go:406] StartCluster complete in 23.574510275s
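	(Note: the repeated "kubectl get sa default" calls above are minikube polling until the default service account exists before it finishes elevateKubeSystemPrivileges. A hedged manual equivalent of that check, using the kubeconfig context from this run:)
	    kubectl --context addons-489440 get serviceaccount default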
	I1205 19:36:11.113765   13818 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:11.113887   13818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:36:11.114233   13818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:36:11.114452   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:36:11.114520   13818 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
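	(Note: the toEnable map above is the per-profile addon selection for this run. A hedged CLI equivalent for toggling one of these, e.g. the ingress addon exercised by the failing test:)
	    minikube -p addons-489440 addons enable ingress
	    minikube -p addons-489440 addons list    # shows the resulting enabled/disabled state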
	I1205 19:36:11.114593   13818 addons.go:69] Setting volumesnapshots=true in profile "addons-489440"
	I1205 19:36:11.114604   13818 addons.go:69] Setting ingress-dns=true in profile "addons-489440"
	I1205 19:36:11.114617   13818 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-489440"
	I1205 19:36:11.114627   13818 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-489440"
	I1205 19:36:11.114629   13818 addons.go:231] Setting addon ingress-dns=true in "addons-489440"
	I1205 19:36:11.114631   13818 addons.go:69] Setting storage-provisioner=true in profile "addons-489440"
	I1205 19:36:11.114644   13818 addons.go:69] Setting inspektor-gadget=true in profile "addons-489440"
	I1205 19:36:11.114657   13818 addons.go:69] Setting registry=true in profile "addons-489440"
	I1205 19:36:11.114663   13818 addons.go:231] Setting addon inspektor-gadget=true in "addons-489440"
	I1205 19:36:11.114667   13818 addons.go:231] Setting addon registry=true in "addons-489440"
	I1205 19:36:11.114679   13818 addons.go:69] Setting default-storageclass=true in profile "addons-489440"
	I1205 19:36:11.114695   13818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-489440"
	I1205 19:36:11.114706   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114715   13818 addons.go:69] Setting gcp-auth=true in profile "addons-489440"
	I1205 19:36:11.114720   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:11.114737   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114767   13818 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-489440"
	I1205 19:36:11.114765   13818 addons.go:69] Setting cloud-spanner=true in profile "addons-489440"
	I1205 19:36:11.114798   13818 addons.go:231] Setting addon cloud-spanner=true in "addons-489440"
	I1205 19:36:11.114637   13818 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-489440"
	I1205 19:36:11.114708   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114847   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114659   13818 addons.go:231] Setting addon storage-provisioner=true in "addons-489440"
	I1205 19:36:11.114940   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115159   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115168   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115187   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115159   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115215   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115218   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115234   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.114648   13818 addons.go:69] Setting metrics-server=true in profile "addons-489440"
	I1205 19:36:11.115272   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115278   13818 addons.go:231] Setting addon metrics-server=true in "addons-489440"
	I1205 19:36:11.114742   13818 mustload.go:65] Loading cluster: addons-489440
	I1205 19:36:11.114804   13818 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-489440"
	I1205 19:36:11.114748   13818 addons.go:69] Setting helm-tiller=true in profile "addons-489440"
	I1205 19:36:11.115304   13818 addons.go:231] Setting addon helm-tiller=true in "addons-489440"
	I1205 19:36:11.114759   13818 addons.go:69] Setting ingress=true in profile "addons-489440"
	I1205 19:36:11.115317   13818 addons.go:231] Setting addon ingress=true in "addons-489440"
	I1205 19:36:11.114622   13818 addons.go:231] Setting addon volumesnapshots=true in "addons-489440"
	I1205 19:36:11.115321   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.114837   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115357   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115370   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115395   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115444   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.114640   13818 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-489440"
	I1205 19:36:11.115256   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.115758   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115781   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.115886   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115911   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116088   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.115763   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116119   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116140   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116193   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116222   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116418   13818 config.go:182] Loaded profile config "addons-489440": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:36:11.116474   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116506   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116609   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.116636   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.116692   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.134340   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41105
	I1205 19:36:11.134836   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.134937   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I1205 19:36:11.135085   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I1205 19:36:11.135209   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.135516   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.135533   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.135880   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.135962   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.135970   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.135979   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.136482   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.136521   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.136980   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.137086   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.137107   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.137530   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.137570   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.138081   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.138604   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I1205 19:36:11.138639   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I1205 19:36:11.138607   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.138681   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.138984   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.139050   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.139436   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.139452   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.139574   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.139586   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.139759   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.140174   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.140196   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.140294   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.154832   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.154898   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.155035   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.154906   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.155496   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.155541   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.168741   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I1205 19:36:11.169434   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.170462   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.170482   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.170831   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.170915   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43193
	I1205 19:36:11.171299   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.171708   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.171724   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.172024   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.172243   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.173142   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.174875   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.177307   13818 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1205 19:36:11.175339   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.178987   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1205 19:36:11.179008   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1205 19:36:11.179029   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.179250   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I1205 19:36:11.180917   13818 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 19:36:11.182760   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.181098   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1205 19:36:11.181520   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.182207   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.182960   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.184209   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.184221   13818 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1205 19:36:11.185562   13818 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 19:36:11.185578   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1205 19:36:11.185596   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.184236   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.184330   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.184670   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.185756   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.185777   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.185782   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.186249   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.186266   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.186658   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.186857   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.187069   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.187610   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.187647   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.189409   13818 addons.go:231] Setting addon default-storageclass=true in "addons-489440"
	I1205 19:36:11.189451   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.189825   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.189857   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.190071   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.190702   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.190714   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I1205 19:36:11.190724   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.190830   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.191008   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.191170   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.191282   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.192404   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.192919   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.192935   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.193270   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.193560   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.195104   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.197004   13818 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1205 19:36:11.199123   13818 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:11.199141   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 19:36:11.199161   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.198384   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I1205 19:36:11.198396   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I1205 19:36:11.201894   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I1205 19:36:11.202019   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1205 19:36:11.202136   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.202376   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.202781   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.202932   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.202955   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203033   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I1205 19:36:11.203056   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.203370   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.203471   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.203491   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.203563   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.203579   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203674   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.203692   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.203944   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I1205 19:36:11.204045   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.204071   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204089   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.204102   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204531   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204555   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204585   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.204656   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.204785   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.204815   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.204951   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.205099   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.205114   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.205292   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.205488   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.205588   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.205794   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.206097   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.206122   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.206446   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.206463   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.206809   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.207019   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.207151   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.207705   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.207722   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.207791   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
	I1205 19:36:11.208896   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.209147   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.209241   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.211314   13818 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:36:11.210171   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.210703   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.212751   13818 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:11.212764   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:36:11.212783   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.215350   13818 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1205 19:36:11.219614   13818 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1205 19:36:11.219633   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1205 19:36:11.219654   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.215231   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.219720   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.215596   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I1205 19:36:11.216382   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.219868   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.219901   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.216936   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.220736   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.220944   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.221116   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.222059   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.222337   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.222482   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.222967   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.223154   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.223182   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.223366   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.223386   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.223669   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.223735   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.223884   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.224016   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.224143   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.224319   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.224354   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.226517   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I1205 19:36:11.226949   13818 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-489440"
	I1205 19:36:11.227001   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.227004   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.227419   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.227458   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.227492   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.227507   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.227864   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.227971   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.229263   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I1205 19:36:11.229848   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.230050   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.232281   13818 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1205 19:36:11.230534   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.234172   13818 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:11.232039   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37267
	I1205 19:36:11.234186   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 19:36:11.234201   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.232313   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.232754   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I1205 19:36:11.234804   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.234818   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.234880   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.235103   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.235421   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.235440   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.235634   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I1205 19:36:11.235815   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.236004   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.236345   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.236359   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.236753   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.236933   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:11.237361   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.237404   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.237622   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.237716   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.237760   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.239222   13818 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1205 19:36:11.238733   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.240651   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.240679   13818 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:11.239663   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.240299   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.240740   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 19:36:11.240760   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.240761   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I1205 19:36:11.240810   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.240831   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.241000   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I1205 19:36:11.241018   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.241044   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.241284   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.241344   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.241343   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.241704   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.242216   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.244006   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1205 19:36:11.244076   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.243133   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.243949   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.242633   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.244604   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.245507   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:11.247081   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:11.245552   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.245569   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.248562   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.249988   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 19:36:11.245881   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.245992   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.247519   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.248713   13818 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:11.251383   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1205 19:36:11.251409   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 19:36:11.251433   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 19:36:11.251458   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.251494   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.251417   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.251690   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.251994   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.252016   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.252209   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.252399   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.252909   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.253355   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I1205 19:36:11.253675   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I1205 19:36:11.253961   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.254012   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.254189   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.254458   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.254472   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.256139   13818 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1205 19:36:11.254877   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.255291   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.256863   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257587   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.257618   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 19:36:11.257587   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.257334   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.257654   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257424   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.256906   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257715   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.257736   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.257632   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 19:36:11.257758   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.258060   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.258067   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.258106   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.258174   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:11.258189   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.258206   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:11.258228   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.258286   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.258388   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.258433   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.260703   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.261011   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.262668   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 19:36:11.261509   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.262711   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.261691   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.262945   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.264165   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 19:36:11.264373   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.265633   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 19:36:11.265812   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.267719   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 19:36:11.269179   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 19:36:11.270431   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I1205 19:36:11.271804   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 19:36:11.270850   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.274266   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 19:36:11.273599   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.275620   13818 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 19:36:11.274218   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I1205 19:36:11.274816   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I1205 19:36:11.275645   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.276859   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 19:36:11.276872   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 19:36:11.276887   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.277402   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.277866   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.277880   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.277979   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:11.278317   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.278339   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.278475   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:11.278492   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:11.278710   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.279018   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.279552   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:11.279743   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:11.280861   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.280910   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281304   13818 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:11.281317   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:36:11.281332   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.281339   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.281360   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281496   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:11.281579   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.283355   13818 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 19:36:11.284771   13818 out.go:177]   - Using image docker.io/busybox:stable
	I1205 19:36:11.283644   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.281741   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.284181   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.286294   13818 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:11.284800   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.286312   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 19:36:11.284979   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.284988   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.286329   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.286328   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:11.286492   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.286520   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.286636   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:11.288716   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.289078   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:11.289106   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:11.289273   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:11.289435   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:11.289576   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:11.289692   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	W1205 19:36:11.290502   13818 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41202->192.168.39.118:22: read: connection reset by peer
	I1205 19:36:11.290526   13818 retry.go:31] will retry after 195.705751ms: ssh: handshake failed: read tcp 192.168.39.1:41202->192.168.39.118:22: read: connection reset by peer
	I1205 19:36:11.430282   13818 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 19:36:11.430312   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 19:36:11.502972   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1205 19:36:11.503002   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1205 19:36:11.518037   13818 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:11.518070   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 19:36:11.519990   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 19:36:11.540994   13818 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1205 19:36:11.541021   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1205 19:36:11.567629   13818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
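	(Note: the bash pipeline above injects a "hosts" block mapping host.minikube.internal to 192.168.39.1, plus a "log" directive, into the CoreDNS Corefile. A hedged way to confirm the rewrite landed, reusing the context name from this run:)
	    kubectl --context addons-489440 -n kube-system get configmap coredns -o yaml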
	I1205 19:36:11.572676   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 19:36:11.572700   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 19:36:11.578227   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 19:36:11.615151   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 19:36:11.615175   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 19:36:11.615524   13818 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-489440" context rescaled to 1 replicas
	I1205 19:36:11.615565   13818 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:36:11.617528   13818 out.go:177] * Verifying Kubernetes components...
	I1205 19:36:11.618982   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:36:11.637066   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 19:36:11.637096   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 19:36:11.640280   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:36:11.648833   13818 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:11.648858   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1205 19:36:11.669257   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 19:36:11.694305   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 19:36:11.807391   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 19:36:11.813783   13818 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1205 19:36:11.813813   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1205 19:36:11.819832   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:36:11.875339   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 19:36:11.875367   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 19:36:11.941670   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 19:36:11.941695   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 19:36:12.119335   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 19:36:12.119363   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 19:36:12.119585   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1205 19:36:12.136371   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 19:36:12.208221   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 19:36:12.208242   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 19:36:12.234143   13818 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1205 19:36:12.234171   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1205 19:36:12.265811   13818 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:12.265833   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 19:36:12.284092   13818 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 19:36:12.284121   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 19:36:12.415386   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 19:36:12.415411   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 19:36:12.434752   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 19:36:12.442791   13818 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1205 19:36:12.442817   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1205 19:36:12.444322   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 19:36:12.444342   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 19:36:12.513776   13818 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:12.513796   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 19:36:12.519705   13818 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1205 19:36:12.519721   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1205 19:36:12.531041   13818 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 19:36:12.531062   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 19:36:12.600068   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:12.611314   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 19:36:12.611342   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 19:36:12.632056   13818 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 19:36:12.632079   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1205 19:36:12.760040   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 19:36:12.760068   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 19:36:12.761012   13818 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:12.761024   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1205 19:36:12.818113   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 19:36:12.818137   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 19:36:12.822744   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1205 19:36:12.850508   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 19:36:12.850529   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 19:36:12.883590   13818 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:12.883614   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 19:36:12.937198   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 19:36:15.658134   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.138105868s)
	I1205 19:36:15.658197   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:15.658211   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:15.658572   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:15.658635   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:15.658653   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:15.658663   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:15.658675   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:15.658981   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:15.658981   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:15.659012   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:16.979949   13818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.412271769s)
	I1205 19:36:16.979981   13818 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
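The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP (192.168.39.1), by inserting a hosts block ahead of the forward directive. A small sketch of the same transformation applied to a Corefile string in Go; the trimmed Corefile literal is an assumed example, the IP and hostname come from the log:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a "hosts" block before the forward directive of a
// Corefile, which is what the sed command in the log does to the coredns
// ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}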
	I1205 19:36:17.785590   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.207332815s)
	I1205 19:36:17.785643   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:17.785648   13818 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.16663415s)
	I1205 19:36:17.785656   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:17.785983   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:17.786001   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:17.786012   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:17.786014   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:17.786020   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:17.786290   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:17.786307   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:17.786633   13818 node_ready.go:35] waiting up to 6m0s for node "addons-489440" to be "Ready" ...
	I1205 19:36:17.954342   13818 node_ready.go:49] node "addons-489440" has status "Ready":"True"
	I1205 19:36:17.954366   13818 node_ready.go:38] duration metric: took 167.702127ms waiting for node "addons-489440" to be "Ready" ...
	I1205 19:36:17.954375   13818 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:36:18.128632   13818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:18.610551   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 19:36:18.610598   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:18.613934   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:18.614515   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:18.614552   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:18.614774   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:18.615006   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:18.615180   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:18.615355   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:18.716708   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.07638677s)
	I1205 19:36:18.716767   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:18.716780   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:18.717095   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:18.717127   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:18.717144   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:18.717163   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:18.717497   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:18.717512   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:18.717520   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:18.972267   13818 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 19:36:19.014804   13818 addons.go:231] Setting addon gcp-auth=true in "addons-489440"
	I1205 19:36:19.014870   13818 host.go:66] Checking if "addons-489440" exists ...
	I1205 19:36:19.015189   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:19.015216   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:19.029508   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38187
	I1205 19:36:19.029938   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:19.030366   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:19.030389   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:19.030708   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:19.031254   13818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:36:19.031285   13818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:36:19.045761   13818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43887
	I1205 19:36:19.046237   13818 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:36:19.046696   13818 main.go:141] libmachine: Using API Version  1
	I1205 19:36:19.046723   13818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:36:19.047041   13818 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:36:19.047232   13818 main.go:141] libmachine: (addons-489440) Calling .GetState
	I1205 19:36:19.048877   13818 main.go:141] libmachine: (addons-489440) Calling .DriverName
	I1205 19:36:19.049091   13818 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 19:36:19.049113   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHHostname
	I1205 19:36:19.051789   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:19.052244   13818 main.go:141] libmachine: (addons-489440) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:05:ac", ip: ""} in network mk-addons-489440: {Iface:virbr1 ExpiryTime:2023-12-05 20:35:30 +0000 UTC Type:0 Mac:52:54:00:e7:05:ac Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:addons-489440 Clientid:01:52:54:00:e7:05:ac}
	I1205 19:36:19.052265   13818 main.go:141] libmachine: (addons-489440) DBG | domain addons-489440 has defined IP address 192.168.39.118 and MAC address 52:54:00:e7:05:ac in network mk-addons-489440
	I1205 19:36:19.052428   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHPort
	I1205 19:36:19.052606   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHKeyPath
	I1205 19:36:19.052782   13818 main.go:141] libmachine: (addons-489440) Calling .GetSSHUsername
	I1205 19:36:19.052949   13818 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/addons-489440/id_rsa Username:docker}
	I1205 19:36:20.417049   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:20.420728   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.751435639s)
	I1205 19:36:20.420776   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420777   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.726435776s)
	I1205 19:36:20.420789   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420816   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420831   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420829   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.613404053s)
	I1205 19:36:20.420863   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420873   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.60101463s)
	I1205 19:36:20.420883   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420889   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420898   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.420942   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.30133276s)
	I1205 19:36:20.420971   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.420993   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.284598573s)
	I1205 19:36:20.421010   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421018   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421040   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421118   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.986337529s)
	I1205 19:36:20.421135   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421144   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.421269   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.821170644s)
	W1205 19:36:20.421295   13818 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 19:36:20.421311   13818 retry.go:31] will retry after 207.414297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
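The failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same invocation that creates its CRD, so there is no REST mapping for the kind yet; the run retries and the later apply with --force succeeds once the CRDs are registered. A hedged sketch of one way to avoid the race, waiting for each CRD to report the Established condition before applying dependent objects (shells out to kubectl, which must be on PATH; the helper is illustrative, not minikube's code, and the CRD names are taken from the error text):

package main

import (
	"fmt"
	"os/exec"
)

// waitForCRDs blocks until each named CRD reports condition=established,
// so objects of those kinds can be applied without a mapping error.
func waitForCRDs(names ...string) error {
	for _, name := range names {
		cmd := exec.Command("kubectl", "wait",
			"--for=condition=established",
			"--timeout=60s",
			"crd/"+name)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("crd %s not established: %v\n%s", name, err, out)
		}
	}
	return nil
}

func main() {
	crds := []string{
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshots.snapshot.storage.k8s.io",
	}
	if err := waitForCRDs(crds...); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRDs established; safe to apply VolumeSnapshotClass objects")
}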
	I1205 19:36:20.421407   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.598634254s)
	I1205 19:36:20.421438   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.421450   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.422947   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.422944   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.422962   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.422973   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.422983   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.422987   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.422996   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423004   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423012   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423032   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423055   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423063   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423073   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423077   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423081   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423103   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423111   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423119   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423124   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423126   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423141   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423162   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423171   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423179   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423187   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423235   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423244   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423252   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423260   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423292   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423304   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423313   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423321   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423364   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423385   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423393   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423401   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.423408   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.423429   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423446   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423461   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.423473   13818 addons.go:467] Verifying addon registry=true in "addons-489440"
	I1205 19:36:20.423511   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.425385   13818 out.go:177] * Verifying registry addon...
	I1205 19:36:20.423571   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423631   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423658   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423812   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.423848   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423880   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.423899   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424083   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424107   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424268   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.424292   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424402   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.424423   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.426943   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426955   13818 addons.go:467] Verifying addon metrics-server=true in "addons-489440"
	I1205 19:36:20.426987   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426987   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.426999   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427054   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427097   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427216   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.427227   13818 addons.go:467] Verifying addon ingress=true in "addons-489440"
	I1205 19:36:20.429765   13818 out.go:177] * Verifying ingress addon...
	I1205 19:36:20.427804   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 19:36:20.432368   13818 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 19:36:20.451632   13818 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 19:36:20.451649   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.454665   13818 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 19:36:20.454688   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
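The kapi lines above begin the poll loops that repeat for the rest of this log: list pods by label selector, report the current phase, and keep waiting while any pod is Pending. A minimal stand-alone sketch of that loop (shells out to kubectl; the namespace and selector are taken from the log, the polling interval and limit are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podsRunning reports whether every pod matching the selector is Running.
func podsRunning(namespace, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pods",
		"-n", namespace, "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	for i := 0; i < 60; i++ {
		ok, err := podsRunning("kube-system", "kubernetes.io/minikube-addons=registry")
		if err != nil {
			fmt.Println("kubectl error:", err)
		}
		if ok {
			fmt.Println("registry pods are Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for registry pods")
}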
	I1205 19:36:20.462445   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.462465   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.462596   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:20.462613   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:20.462703   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:20.462705   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.462721   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.462922   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:20.462961   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:20.462971   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	W1205 19:36:20.463048   13818 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1205 19:36:20.467123   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:20.467314   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:20.629280   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 19:36:21.026747   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.090407   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.386480   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.449222303s)
	I1205 19:36:21.386517   13818 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.337402358s)
	I1205 19:36:21.386535   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:21.386549   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:21.388210   13818 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1205 19:36:21.386925   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:21.386927   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:21.390794   13818 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1205 19:36:21.389486   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:21.392245   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:21.392254   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:21.392285   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 19:36:21.392304   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 19:36:21.392511   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:21.392519   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:21.392527   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:21.392537   13818 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-489440"
	I1205 19:36:21.393814   13818 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 19:36:21.395580   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 19:36:21.432194   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 19:36:21.432214   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 19:36:21.447404   13818 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 19:36:21.447425   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.471265   13818 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:21.471285   13818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1205 19:36:21.486063   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:21.508822   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:21.519682   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:21.527582   13818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 19:36:22.019273   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.034091   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.036785   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.476053   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.476268   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.522615   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:22.898057   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:22.974429   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:22.974791   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:22.997727   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.325681   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.696345118s)
	I1205 19:36:23.325753   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.325768   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.326123   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.326142   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.326152   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.326162   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.326501   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.326521   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.326524   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.478250   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.478431   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.514933   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:23.638869   13818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.111249736s)
	I1205 19:36:23.638938   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.638950   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.639299   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.639385   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.639391   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.639412   13818 main.go:141] libmachine: Making call to close driver server
	I1205 19:36:23.639432   13818 main.go:141] libmachine: (addons-489440) Calling .Close
	I1205 19:36:23.639680   13818 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:36:23.639697   13818 main.go:141] libmachine: (addons-489440) DBG | Closing plugin on server side
	I1205 19:36:23.639701   13818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:36:23.641375   13818 addons.go:467] Verifying addon gcp-auth=true in "addons-489440"
	I1205 19:36:23.644579   13818 out.go:177] * Verifying gcp-auth addon...
	I1205 19:36:23.647144   13818 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 19:36:23.673386   13818 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 19:36:23.673408   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.702009   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:23.992963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:23.996229   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:23.996580   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.210392   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.473593   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.474469   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.492846   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:24.706137   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:24.903048   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:24.975569   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:24.975634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:24.993145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.208328   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.477621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.477636   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.501485   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:25.706046   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:25.973987   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:25.974311   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:25.991814   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.206317   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.473139   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:26.473208   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.491661   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:26.711054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:26.974238   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:26.978558   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.000512   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.210421   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.410424   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:27.475972   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.475979   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:27.513885   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:27.710042   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:27.979207   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:27.981989   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.005884   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.205985   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.476140   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.476323   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.492941   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:28.709045   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:28.976745   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:28.977208   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:28.995844   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.209633   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.473065   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.473322   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.498338   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:29.706755   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:29.894308   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:29.974774   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:29.975543   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:29.992504   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.206493   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.473925   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.475920   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.492019   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:30.709621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:30.974822   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:30.975678   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:30.991541   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.218830   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.473697   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.476046   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.504739   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:31.727257   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:31.910662   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:31.972503   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:31.974086   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:31.992424   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.216493   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.473674   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.474979   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.495685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:32.712811   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:32.973519   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:32.975173   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:32.998746   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.208117   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.473221   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.474971   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:33.492059   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:33.706663   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:33.978376   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:33.983887   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.013021   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.219287   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:34.394314   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:34.475450   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:34.488699   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:34.493959   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:34.706663   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.006370   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.007703   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.008617   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.209000   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.476074   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:35.477586   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:35.498281   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:35.708621   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:35.992153   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.005297   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.005802   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.210716   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.407290   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:36.477007   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.481687   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.491645   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:36.710528   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:36.976114   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:36.987392   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:36.994029   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.206590   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.477949   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.482544   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.497525   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:37.708763   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:37.972671   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:37.973893   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:37.992095   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.207251   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.472751   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.475426   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.490677   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:38.707569   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:38.910467   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:38.977814   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:38.984009   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:38.994665   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.207243   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.696664   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.698278   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.699749   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:39.725705   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:39.972280   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:39.973645   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:39.991720   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.206813   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.472215   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.473053   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.492837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:40.706980   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:40.973451   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:40.973783   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:40.992145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.214931   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.399348   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:41.472167   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.472979   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.493176   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:41.715364   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:41.974852   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:41.975002   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:41.995292   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.206226   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.473373   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.476151   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.491754   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:42.707585   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:42.974319   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:42.975312   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:42.993263   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.206855   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.473489   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.473862   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.491801   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:43.706237   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:43.894758   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:43.973478   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:43.973881   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:43.992204   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.205737   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.473700   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.474690   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.507158   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:44.705938   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:44.973218   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:44.973685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:44.992273   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.206543   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.473412   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.474963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.492253   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:45.706867   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:45.909112   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:45.972582   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:45.976160   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:45.992634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.206248   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.474205   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:46.474955   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.493821   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:46.706792   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:46.982246   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:46.982262   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.001334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.206219   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.477057   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:47.478934   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.495774   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:47.706604   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:47.974740   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:47.975385   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.003210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.206247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.396790   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:48.473093   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.473780   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.492019   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:48.709158   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:48.975217   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:48.975893   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:48.992351   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.207401   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.475366   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.475656   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.493136   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:49.707554   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:49.979796   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:49.979980   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:49.993565   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.206260   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.574052   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:50.574301   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.578573   13818 pod_ready.go:102] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:50.579776   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.707873   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:50.973349   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:50.973456   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:50.992778   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.205672   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.474649   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.474963   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.494496   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:51.706958   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:51.974058   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:51.975825   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:51.992393   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.207231   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.396466   13818 pod_ready.go:92] pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.396488   13818 pod_ready.go:81] duration metric: took 34.26783129s waiting for pod "coredns-5dd5756b68-bs76k" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.396497   13818 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.399934   13818 pod_ready.go:97] error getting pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-tqsg5" not found
	I1205 19:36:52.399958   13818 pod_ready.go:81] duration metric: took 3.453349ms waiting for pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace to be "Ready" ...
	E1205 19:36:52.399967   13818 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-tqsg5" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-tqsg5" not found
	I1205 19:36:52.399973   13818 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.420585   13818 pod_ready.go:92] pod "etcd-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.420618   13818 pod_ready.go:81] duration metric: took 20.637892ms waiting for pod "etcd-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.420646   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.447174   13818 pod_ready.go:92] pod "kube-apiserver-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.447202   13818 pod_ready.go:81] duration metric: took 26.548818ms waiting for pod "kube-apiserver-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.447215   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.479029   13818 pod_ready.go:92] pod "kube-controller-manager-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.479059   13818 pod_ready.go:81] duration metric: took 31.834453ms waiting for pod "kube-controller-manager-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.479075   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-69z6s" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.486100   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.504145   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.510173   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.595549   13818 pod_ready.go:92] pod "kube-proxy-69z6s" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.595581   13818 pod_ready.go:81] duration metric: took 116.498377ms waiting for pod "kube-proxy-69z6s" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.595596   13818 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.706327   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:52.973678   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:52.978442   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:52.992723   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:52.996057   13818 pod_ready.go:92] pod "kube-scheduler-addons-489440" in "kube-system" namespace has status "Ready":"True"
	I1205 19:36:52.996083   13818 pod_ready.go:81] duration metric: took 400.479344ms waiting for pod "kube-scheduler-addons-489440" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:52.996096   13818 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace to be "Ready" ...
	I1205 19:36:53.208007   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.474738   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.475431   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.491837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:53.707130   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:53.973316   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:53.974814   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:53.991646   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.207334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.474808   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.475221   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.494927   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:54.707519   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:54.972619   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:54.974162   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:54.993847   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.206401   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.301105   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:55.473160   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.476602   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.491848   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:55.706579   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:55.976093   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:55.976767   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:55.991941   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.206226   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.474938   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:56.475837   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.491682   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:56.706320   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:56.975618   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:56.978362   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.000866   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.205759   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.302861   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:57.482807   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.483041   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.497534   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:57.710685   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:57.975959   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:57.977510   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:57.994943   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.206461   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:58.477585   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:58.479583   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:58.494658   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:58.706595   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.143101   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.143611   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.149315   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.218911   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.329023   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:36:59.475858   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.476009   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.500144   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:36:59.705918   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:36:59.972342   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:36:59.973111   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:36:59.992065   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.209140   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.474233   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.476042   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.491732   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:00.705816   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:00.974241   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:00.975611   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:00.993934   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.207798   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.472946   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.474964   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.492969   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:01.706796   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:01.801471   13818 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"False"
	I1205 19:37:01.972357   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:01.973832   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:01.991800   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.208137   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.473530   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.477987   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.491889   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:02.718351   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:02.974443   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:02.976247   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:02.991251   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.207239   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.300090   13818 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace has status "Ready":"True"
	I1205 19:37:03.300112   13818 pod_ready.go:81] duration metric: took 10.304006915s waiting for pod "nvidia-device-plugin-daemonset-jw4c2" in "kube-system" namespace to be "Ready" ...
	I1205 19:37:03.300124   13818 pod_ready.go:38] duration metric: took 45.345740112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:37:03.300140   13818 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:37:03.300187   13818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:37:03.365095   13818 api_server.go:72] duration metric: took 51.74949574s to wait for apiserver process to appear ...
	I1205 19:37:03.365117   13818 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:37:03.365132   13818 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I1205 19:37:03.370236   13818 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I1205 19:37:03.371688   13818 api_server.go:141] control plane version: v1.28.4
	I1205 19:37:03.371708   13818 api_server.go:131] duration metric: took 6.583537ms to wait for apiserver health ...
	I1205 19:37:03.371717   13818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:37:03.386855   13818 system_pods.go:59] 18 kube-system pods found
	I1205 19:37:03.386882   13818 system_pods.go:61] "coredns-5dd5756b68-bs76k" [9235bef7-f927-40da-967d-19ee49cafa9d] Running
	I1205 19:37:03.386886   13818 system_pods.go:61] "csi-hostpath-attacher-0" [748bd69f-a0cf-49f5-8001-0ed8a15a1143] Running
	I1205 19:37:03.386890   13818 system_pods.go:61] "csi-hostpath-resizer-0" [9bfc6e31-08e6-418a-b32f-38d30424a77b] Running
	I1205 19:37:03.386897   13818 system_pods.go:61] "csi-hostpathplugin-hv64h" [e20670c4-f6aa-45f8-9821-3fd6c17ef864] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:03.386902   13818 system_pods.go:61] "etcd-addons-489440" [27f493cc-d5c2-4b03-95ad-dbcf16ed1e74] Running
	I1205 19:37:03.386908   13818 system_pods.go:61] "kube-apiserver-addons-489440" [676e77c5-9038-49a6-9987-652837182816] Running
	I1205 19:37:03.386912   13818 system_pods.go:61] "kube-controller-manager-addons-489440" [16501b70-40f4-4f63-a75f-d7f43d88464e] Running
	I1205 19:37:03.386916   13818 system_pods.go:61] "kube-ingress-dns-minikube" [2c2fd203-d47b-4fe5-bc28-544f23d55a61] Running
	I1205 19:37:03.386920   13818 system_pods.go:61] "kube-proxy-69z6s" [045a74a8-9584-44c6-a651-c58ff036bf8a] Running
	I1205 19:37:03.386925   13818 system_pods.go:61] "kube-scheduler-addons-489440" [495e957d-8044-4e22-8e20-455f2d3c3b96] Running
	I1205 19:37:03.386931   13818 system_pods.go:61] "metrics-server-7c66d45ddc-msjks" [5361bdf5-6fee-48ec-8911-5271ae9055e5] Running
	I1205 19:37:03.386938   13818 system_pods.go:61] "nvidia-device-plugin-daemonset-jw4c2" [2e516e12-3f41-47c1-a610-801efcb32379] Running
	I1205 19:37:03.386948   13818 system_pods.go:61] "registry-2nhwg" [1e708b27-168c-4eae-aebb-7d96da6c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:37:03.386959   13818 system_pods.go:61] "registry-proxy-wnn8h" [2f34e994-0f5a-4ee5-8faa-f0de5de7c04b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:03.386977   13818 system_pods.go:61] "snapshot-controller-58dbcc7b99-g77xf" [0935f346-3928-4760-9a36-10431ed6ce2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:37:03.386982   13818 system_pods.go:61] "snapshot-controller-58dbcc7b99-kkkvt" [37b5c9ae-1c6d-44c6-8c0b-818d39121ceb] Running
	I1205 19:37:03.386986   13818 system_pods.go:61] "storage-provisioner" [f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed] Running
	I1205 19:37:03.386995   13818 system_pods.go:61] "tiller-deploy-7b677967b9-l5vtg" [7e6cc3fe-6001-4c06-a49e-003585210abd] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1205 19:37:03.387000   13818 system_pods.go:74] duration metric: took 15.277981ms to wait for pod list to return data ...
	I1205 19:37:03.387010   13818 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:37:03.392666   13818 default_sa.go:45] found service account: "default"
	I1205 19:37:03.392685   13818 default_sa.go:55] duration metric: took 5.669609ms for default service account to be created ...
	I1205 19:37:03.392694   13818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:37:03.405703   13818 system_pods.go:86] 18 kube-system pods found
	I1205 19:37:03.405732   13818 system_pods.go:89] "coredns-5dd5756b68-bs76k" [9235bef7-f927-40da-967d-19ee49cafa9d] Running
	I1205 19:37:03.405743   13818 system_pods.go:89] "csi-hostpath-attacher-0" [748bd69f-a0cf-49f5-8001-0ed8a15a1143] Running
	I1205 19:37:03.405749   13818 system_pods.go:89] "csi-hostpath-resizer-0" [9bfc6e31-08e6-418a-b32f-38d30424a77b] Running
	I1205 19:37:03.405761   13818 system_pods.go:89] "csi-hostpathplugin-hv64h" [e20670c4-f6aa-45f8-9821-3fd6c17ef864] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 19:37:03.405769   13818 system_pods.go:89] "etcd-addons-489440" [27f493cc-d5c2-4b03-95ad-dbcf16ed1e74] Running
	I1205 19:37:03.405777   13818 system_pods.go:89] "kube-apiserver-addons-489440" [676e77c5-9038-49a6-9987-652837182816] Running
	I1205 19:37:03.405783   13818 system_pods.go:89] "kube-controller-manager-addons-489440" [16501b70-40f4-4f63-a75f-d7f43d88464e] Running
	I1205 19:37:03.405790   13818 system_pods.go:89] "kube-ingress-dns-minikube" [2c2fd203-d47b-4fe5-bc28-544f23d55a61] Running
	I1205 19:37:03.405798   13818 system_pods.go:89] "kube-proxy-69z6s" [045a74a8-9584-44c6-a651-c58ff036bf8a] Running
	I1205 19:37:03.405805   13818 system_pods.go:89] "kube-scheduler-addons-489440" [495e957d-8044-4e22-8e20-455f2d3c3b96] Running
	I1205 19:37:03.405815   13818 system_pods.go:89] "metrics-server-7c66d45ddc-msjks" [5361bdf5-6fee-48ec-8911-5271ae9055e5] Running
	I1205 19:37:03.405823   13818 system_pods.go:89] "nvidia-device-plugin-daemonset-jw4c2" [2e516e12-3f41-47c1-a610-801efcb32379] Running
	I1205 19:37:03.405837   13818 system_pods.go:89] "registry-2nhwg" [1e708b27-168c-4eae-aebb-7d96da6c9f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 19:37:03.405852   13818 system_pods.go:89] "registry-proxy-wnn8h" [2f34e994-0f5a-4ee5-8faa-f0de5de7c04b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 19:37:03.405868   13818 system_pods.go:89] "snapshot-controller-58dbcc7b99-g77xf" [0935f346-3928-4760-9a36-10431ed6ce2f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 19:37:03.405879   13818 system_pods.go:89] "snapshot-controller-58dbcc7b99-kkkvt" [37b5c9ae-1c6d-44c6-8c0b-818d39121ceb] Running
	I1205 19:37:03.405890   13818 system_pods.go:89] "storage-provisioner" [f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed] Running
	I1205 19:37:03.405903   13818 system_pods.go:89] "tiller-deploy-7b677967b9-l5vtg" [7e6cc3fe-6001-4c06-a49e-003585210abd] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1205 19:37:03.405912   13818 system_pods.go:126] duration metric: took 13.212255ms to wait for k8s-apps to be running ...
	I1205 19:37:03.405925   13818 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:37:03.405975   13818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:37:03.447688   13818 system_svc.go:56] duration metric: took 41.757926ms WaitForService to wait for kubelet.
	I1205 19:37:03.447712   13818 kubeadm.go:581] duration metric: took 51.832119007s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:37:03.447736   13818 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:37:03.454072   13818 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 19:37:03.454103   13818 node_conditions.go:123] node cpu capacity is 2
	I1205 19:37:03.454120   13818 node_conditions.go:105] duration metric: took 6.378633ms to run NodePressure ...
	I1205 19:37:03.454133   13818 start.go:228] waiting for startup goroutines ...
	I1205 19:37:03.472233   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.472790   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.492104   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:03.706247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:03.975395   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:03.975950   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:03.994200   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.214103   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.476643   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.477035   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.493082   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:04.706252   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:04.975134   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:04.976372   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:04.992519   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.209054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.476116   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:05.477258   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.492924   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:05.706027   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:05.973875   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:05.974059   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.000195   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.205739   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.473655   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.475662   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:06.492738   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:06.706372   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:06.998209   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:06.998968   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.008334   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.206210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:07.473706   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:07.475377   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:07.497417   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:07.706090   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.374744   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.375016   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.375054   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.376336   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.476301   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.477893   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.497239   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:08.706444   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:08.972264   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:08.972986   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:08.992565   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.208461   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.474353   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.476001   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.493131   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:09.706470   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:09.973150   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 19:37:09.973175   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:09.992349   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.206538   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.479252   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.479606   13818 kapi.go:107] duration metric: took 50.051802769s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 19:37:10.491981   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:10.709079   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:10.972866   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:10.997022   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.210399   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.472928   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.492466   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:11.711576   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:11.973043   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:11.993453   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:12.206624   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:12.499951   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:12.520408   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.153161   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.154041   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.159028   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.205922   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.473064   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.492152   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:13.706354   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:13.973305   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:13.992210   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.206366   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.474106   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.501900   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:14.707478   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:14.971560   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:14.992677   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.206300   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.476049   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.493136   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:15.706634   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:15.972485   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:15.992647   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.206608   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.475155   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:16.523574   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:16.707854   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:16.974459   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.008384   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.207627   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.481192   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.501148   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:17.706344   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:17.972480   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:17.992454   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.206915   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.473772   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.501051   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:18.706411   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:18.972549   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:18.992819   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.206249   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.472680   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:19.492926   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:19.706447   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:19.975602   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.017416   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.430355   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.473298   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.492828   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:20.705760   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:20.972868   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:20.991833   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.206235   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.473281   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.493007   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:21.707028   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:21.972966   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:21.994051   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.210080   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.473014   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.493095   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:22.707907   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:22.972251   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:22.991767   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.205344   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.473126   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.493889   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:23.709247   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:23.975034   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:23.992020   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.206804   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.478262   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.492558   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:24.710245   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:24.974364   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:24.996955   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.207354   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.477207   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.491711   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:25.705560   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:25.972443   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:25.992180   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.209170   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.475915   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.492489   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:26.707767   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:26.978232   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:26.995122   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.206080   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.480068   13818 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 19:37:27.495360   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:27.706993   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:27.972909   13818 kapi.go:107] duration metric: took 1m7.540535763s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 19:37:27.992688   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.206066   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.492449   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:28.706113   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:28.993076   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.207170   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.494608   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:29.706246   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:29.992405   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.208520   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.494521   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:30.711686   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:30.993715   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.207327   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.495321   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:31.706656   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:31.993097   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.206669   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 19:37:32.493477   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:32.718640   13818 kapi.go:107] duration metric: took 1m9.071495911s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 19:37:32.720356   13818 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-489440 cluster.
	I1205 19:37:32.721822   13818 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 19:37:32.723220   13818 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 19:37:32.992522   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.494196   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:33.991939   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.492453   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:34.991488   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.493712   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:35.993561   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.492308   13818 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 19:37:36.993127   13818 kapi.go:107] duration metric: took 1m15.597543637s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 19:37:36.995260   13818 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, cloud-spanner, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1205 19:37:36.996893   13818 addons.go:502] enable addons completed in 1m25.882369128s: enabled=[nvidia-device-plugin ingress-dns storage-provisioner metrics-server helm-tiller inspektor-gadget cloud-spanner default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1205 19:37:36.996933   13818 start.go:233] waiting for cluster config update ...
	I1205 19:37:36.996952   13818 start.go:242] writing updated cluster config ...
	I1205 19:37:36.997202   13818 ssh_runner.go:195] Run: rm -f paused
	I1205 19:37:37.047987   13818 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 19:37:37.049742   13818 out.go:177] * Done! kubectl is now configured to use "addons-489440" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 19:35:27 UTC, ends at Tue 2023-12-05 19:45:48 UTC. --
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.337936902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805548337912947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=84e18ac2-a40c-4cf7-be0b-c74f7dc3ef06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.338588741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8537ece1-c793-4fa5-8dc0-fa7296e4c4e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.338639558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8537ece1-c793-4fa5-8dc0-fa7296e4c4e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.339065898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e3
99310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee4
5d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINE
R_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969
ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:
7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900b
cc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8537ece1-c793-4fa5-8dc0-fa7296e4c4e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.379941002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2dfea54d-88cd-4498-8cb1-27e9991ff7b4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.379999063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2dfea54d-88cd-4498-8cb1-27e9991ff7b4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.381077764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=88a80271-a04d-4054-9919-c9c8883449ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.382364322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805548382345442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=88a80271-a04d-4054-9919-c9c8883449ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.383480545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=75fe3988-2c6c-4f46-a236-441a678eb7bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.383534572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=75fe3988-2c6c-4f46-a236-441a678eb7bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.383898131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e3
99310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee4
5d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINE
R_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969
ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:
7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900b
cc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=75fe3988-2c6c-4f46-a236-441a678eb7bd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.419581773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8df7199b-790d-4af3-a606-9c7aa16e9832 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.419673825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8df7199b-790d-4af3-a606-9c7aa16e9832 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.421455856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a37ce6a4-9d60-4429-8b8e-a2a0ede8de2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.423435845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805548423415961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=a37ce6a4-9d60-4429-8b8e-a2a0ede8de2d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.424367352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f6054e46-cb09-40eb-a00f-21d4bbc62348 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.424421557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f6054e46-cb09-40eb-a00f-21d4bbc62348 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.424797841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e3
99310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee4
5d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINE
R_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969
ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:
7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900b
cc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f6054e46-cb09-40eb-a00f-21d4bbc62348 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.465016017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c04d0835-1880-4e63-ae35-af70ea833d7f name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.465073777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c04d0835-1880-4e63-ae35-af70ea833d7f name=/runtime.v1.RuntimeService/Version
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.467420485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=693b92c3-e3ac-4f7f-8ec6-79a227410bd7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.469558415Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701805548469539277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:535638,},InodesUsed:&UInt64Value{Value:222,},},},}" file="go-grpc-middleware/chain.go:25" id=693b92c3-e3ac-4f7f-8ec6-79a227410bd7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.474160139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=23886876-bc54-41a5-a3a5-149dc18a8c9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.474243226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=23886876-bc54-41a5-a3a5-149dc18a8c9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:45:48 addons-489440 crio[714]: time="2023-12-05 19:45:48.474570946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ae1985394f3bbd437c0c2d21f8dbae1b4f714694350182d5c340a07e27b9ed77,PodSandboxId:64b6a247bda418bdef85ab8f9338f579644a50cb9a8fc0830819a4491588810c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701805217789497138,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-fp699,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c5ce67fa-2974-4a50-b268-c7cb5c386789,},Annotations:map[string]string{io.kubernetes.container.hash: 69b2b158,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5ad09ba49d2178f592cd0fcec154277efef7f7a767033b2ccebc8ec9163f05b,PodSandboxId:eb45659ad86cdc2727cf596e3cdba5c4524c2b7dab82e4f95ceacdad73063061,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701805083254474542,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-p25zv,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 9ec94079-0e4c-4256-8e7c-08a5876826ed,},An
notations:map[string]string{io.kubernetes.container.hash: 59fbd5d8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28adb15f8f7100f7f6e5fc0c6dcbcf2406eff926a5b7d327e07da3d19e090f9,PodSandboxId:ddf2dcdf860d90d8f187b8dead1e79e7226a3bbdb64e972ba397616c253a2713,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701805077620648780,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: b2d06cd4-9f3f-4f9d-a51b-aef82985ccb5,},Annotations:map[string]string{io.kubernetes.container.hash: ccb6b098,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7,PodSandboxId:41343de0b4b473b41f2d88e3f9e82e6a954288ecc8fa070cbc27f18c990f7357,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701805052175047974,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-v4pj4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 411241ae-93cb-4fe6-8768-1285d85fbbb1,},Annotations:map[string]string{io.kubernetes.container.hash: 932a9c98,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a8de113f71e729fb1cd850c935ca61d2b6169e3c7d0a68c81612a837999e309,PodSandboxId:9a60469a89015dfddebf24ca8eae888888ec6eb9e50e9c6220df4455e7fb79ea,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1c
f160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1701805036435438587,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-v8xhf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d159067e-9a77-4049-97c1-906907d960f8,},Annotations:map[string]string{io.kubernetes.container.hash: a2ea8ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee,PodSandboxId:e47ee8c404fa76d1032ce624e6c67459e98200e050da847afa1bf48986a1fed3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e3
99310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701804991810399096,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6cd3887-7b7b-4ac5-a7d7-1747a6c2ceed,},Annotations:map[string]string{io.kubernetes.container.hash: b9107026,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48,PodSandboxId:22028c39f0470afad81bb2d65d7d5c6efde039e4b79725617b8e07463bb5ccc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee4
5d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701804985364359357,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-69z6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a74a8-9584-44c6-a651-c58ff036bf8a,},Annotations:map[string]string{io.kubernetes.container.hash: 2ef61169,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772,PodSandboxId:5fa163c440eedcf71c174dee3eb07e4cb962cf9b90139e5e976cc7e6f30b04fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINE
R_RUNNING,CreatedAt:1701804974429817258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-bs76k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9235bef7-f927-40da-967d-19ee49cafa9d,},Annotations:map[string]string{io.kubernetes.container.hash: 93e12728,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9,PodSandboxId:64a5d6690e766bcb3b39bd0f6523d317d3e4f14e71900016d051e0b300ac19ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969
ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701804951545133089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12f70c4896789ed53271bb02472b801e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991,PodSandboxId:c9bea52f8102c894f6d45264de07889323dab41f32c319d7725c4d6210cd572e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:
7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701804951482957995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01797d2b0d69bcc78af189a94094d79,},Annotations:map[string]string{io.kubernetes.container.hash: b8668b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a,PodSandboxId:4bd684f0f43cfb3a29658283bb72585b3b1994ba641695f21c434a2094eaccf9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701804951338215141,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f19851b50a0aff1b2503b9727d3acc7a,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557,PodSandboxId:6a7a234c307e55a3fc58e9f0bec454c48b15a3e56ed5daa75ea55beb64425667,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900b
cc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701804951102856383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-489440,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 618c1c0d35e2300a6047024ad7716c29,},Annotations:map[string]string{io.kubernetes.container.hash: c4498d46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=23886876-bc54-41a5-a3a5-149dc18a8c9b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ae1985394f3bb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            5 minutes ago       Running             hello-world-app           0                   64b6a247bda41       hello-world-app-5d77478584-fp699
	b5ad09ba49d21       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1              7 minutes ago       Running             headlamp                  0                   eb45659ad86cd       headlamp-777fd4b855-p25zv
	f28adb15f8f71       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    7 minutes ago       Running             nginx                     0                   ddf2dcdf860d9       nginx
	665ca839baff3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06       8 minutes ago       Running             gcp-auth                  0                   41343de0b4b47       gcp-auth-d4c87556c-v4pj4
	9a8de113f71e7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef   8 minutes ago       Running             local-path-provisioner    0                   9a60469a89015       local-path-provisioner-78b46b4d5c-v8xhf
	56da741b0e679       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   9 minutes ago       Running             storage-provisioner       0                   e47ee8c404fa7       storage-provisioner
	08af183c15119       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                   9 minutes ago       Running             kube-proxy                0                   22028c39f0470       kube-proxy-69z6s
	0f36356c251aa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                   9 minutes ago       Running             coredns                   0                   5fa163c440eed       coredns-5dd5756b68-bs76k
	3164ce1003424       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                   9 minutes ago       Running             kube-controller-manager   0                   64a5d6690e766       kube-controller-manager-addons-489440
	d96fc281b8835       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                   9 minutes ago       Running             kube-apiserver            0                   c9bea52f8102c       kube-apiserver-addons-489440
	16af3ce986c39       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                   9 minutes ago       Running             kube-scheduler            0                   4bd684f0f43cf       kube-scheduler-addons-489440
	2ef53e67c550a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                   9 minutes ago       Running             etcd                      0                   6a7a234c307e5       etcd-addons-489440
	
	* 
	* ==> coredns [0f36356c251aa63b064b90f4f2df322a23ba00098a28d099a64c850025e3f772] <==
	* [INFO] 10.244.0.7:55687 - 64799 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.001302398s
	[INFO] 10.244.0.7:36718 - 1494 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128365s
	[INFO] 10.244.0.7:36718 - 40147 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128884s
	[INFO] 10.244.0.7:40731 - 32294 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084429s
	[INFO] 10.244.0.7:40731 - 52260 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116427s
	[INFO] 10.244.0.7:59881 - 23520 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085358s
	[INFO] 10.244.0.7:59881 - 2030 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000271301s
	[INFO] 10.244.0.7:43856 - 17851 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081538s
	[INFO] 10.244.0.7:43856 - 11454 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109014s
	[INFO] 10.244.0.7:49609 - 8744 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067895s
	[INFO] 10.244.0.7:49609 - 60717 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094704s
	[INFO] 10.244.0.7:53791 - 49343 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067382s
	[INFO] 10.244.0.7:53791 - 1981 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082651s
	[INFO] 10.244.0.7:41446 - 7895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067686s
	[INFO] 10.244.0.7:41446 - 16854 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010281s
	[INFO] 10.244.0.21:35627 - 21806 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000416384s
	[INFO] 10.244.0.21:45191 - 17360 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000199205s
	[INFO] 10.244.0.21:45028 - 26672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111045s
	[INFO] 10.244.0.21:35200 - 9190 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103176s
	[INFO] 10.244.0.21:53219 - 22294 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123171s
	[INFO] 10.244.0.21:56835 - 33792 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064469s
	[INFO] 10.244.0.21:39544 - 17888 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000577125s
	[INFO] 10.244.0.21:39586 - 52841 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000412989s
	[INFO] 10.244.0.25:36971 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000361052s
	[INFO] 10.244.0.25:45599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000256739s
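	The repeated NXDOMAIN answers above are the normal cluster-DNS search-path expansion rather than an error in themselves: a name that is not fully qualified (for example "registry.kube-system") is tried against each search suffix first, and only the final "registry.kube-system.svc.cluster.local" lookup returns NOERROR. A minimal way to confirm the search path from a pod in this run is sketched below; it assumes the default ClusterFirst DNS policy, and the expected output in the comments is the Kubernetes default, not output captured from this cluster.
	  kubectl --context addons-489440 exec nginx -- cat /etc/resolv.conf
	  # Expected shape (assumption, not captured here; the nameserver IP is cluster-specific):
	  #   search default.svc.cluster.local svc.cluster.local cluster.local
	  #   options ndots:5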
	
	* 
	* ==> describe nodes <==
	* Name:               addons-489440
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-489440
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=addons-489440
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_35_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-489440
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-489440
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:45:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:45:40 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:45:40 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:45:40 +0000   Tue, 05 Dec 2023 19:35:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:45:40 +0000   Tue, 05 Dec 2023 19:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    addons-489440
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f0caae5431a4ba69508657e9c16b9d8
	  System UUID:                3f0caae5-431a-4ba6-9508-657e9c16b9d8
	  Boot ID:                    15ec9711-35f8-4678-a5f1-f3ddfbade60f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-fp699           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  gadget                      gadget-78klf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  gcp-auth                    gcp-auth-d4c87556c-v4pj4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  headlamp                    headlamp-777fd4b855-p25zv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 coredns-5dd5756b68-bs76k                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m37s
	  kube-system                 etcd-addons-489440                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m49s
	  kube-system                 kube-apiserver-addons-489440               250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-controller-manager-addons-489440      200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-proxy-69z6s                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-scheduler-addons-489440               100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  local-path-storage          local-path-provisioner-78b46b4d5c-v8xhf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
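	  For scale: the node's 2 allocatable CPUs are 2000m, so the 750m of CPU requests above is 750/2000 = 37.5%, rounded down to the 37% shown; the 170Mi of memory requests against 3914496Ki allocatable works out to roughly 4%.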
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m19s  kube-proxy       
	  Normal  Starting                 9m50s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m50s  kubelet          Node addons-489440 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m50s  kubelet          Node addons-489440 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m50s  kubelet          Node addons-489440 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m49s  kubelet          Node addons-489440 status is now: NodeReady
	  Normal  RegisteredNode           9m38s  node-controller  Node addons-489440 event: Registered Node addons-489440 in Controller
	
	* 
	* ==> dmesg <==
	* [  +3.431563] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155710] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.021639] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.377191] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.111156] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.144004] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.099375] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.231125] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +8.895945] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +8.781027] systemd-fstab-generator[1242]: Ignoring "noauto" for root device
	[Dec 5 19:36] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.595135] kauditd_printk_skb: 4 callbacks suppressed
	[ +22.334495] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.114683] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 5 19:37] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.594898] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.464956] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.087008] kauditd_printk_skb: 22 callbacks suppressed
	[Dec 5 19:38] kauditd_printk_skb: 9 callbacks suppressed
	[ +20.108139] kauditd_printk_skb: 7 callbacks suppressed
	[ +19.392247] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 5 19:40] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [2ef53e67c550af26b04ec61bf0c02aa45bfdc23ca23c2e1beac108616f50d557] <==
	* {"level":"info","ts":"2023-12-05T19:37:20.424217Z","caller":"traceutil/trace.go:171","msg":"trace[1886865848] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"407.372444ms","start":"2023-12-05T19:37:20.016838Z","end":"2023-12-05T19:37:20.424211Z","steps":["trace[1886865848] 'process raft request'  (duration: 407.066103ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:20.424317Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T19:37:20.01682Z","time spent":"407.439375ms","remote":"127.0.0.1:48214","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3395,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" mod_revision:1049 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" value_size:3336 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-create-hd2h6\" > >"}
	{"level":"warn","ts":"2023-12-05T19:37:20.424353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.853062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-05T19:37:20.424424Z","caller":"traceutil/trace.go:171","msg":"trace[423503306] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:1052; }","duration":"221.932048ms","start":"2023-12-05T19:37:20.202483Z","end":"2023-12-05T19:37:20.424415Z","steps":["trace[423503306] 'agreement among raft nodes before linearized reading'  (duration: 221.813129ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:20.424566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.380005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11289"}
	{"level":"info","ts":"2023-12-05T19:37:20.424666Z","caller":"traceutil/trace.go:171","msg":"trace[2122425548] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1052; }","duration":"223.48127ms","start":"2023-12-05T19:37:20.201178Z","end":"2023-12-05T19:37:20.424659Z","steps":["trace[2122425548] 'agreement among raft nodes before linearized reading'  (duration: 223.341639ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.196039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.794013ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17375885974416719506 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" value_size:541 lease:8152513937561942651 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T19:37:53.196176Z","caller":"traceutil/trace.go:171","msg":"trace[1640709176] linearizableReadLoop","detail":"{readStateIndex:1332; appliedIndex:1331; }","duration":"258.206067ms","start":"2023-12-05T19:37:52.937954Z","end":"2023-12-05T19:37:53.19616Z","steps":["trace[1640709176] 'read index received'  (duration: 89.079787ms)","trace[1640709176] 'applied index is now lower than readState.Index'  (duration: 169.124959ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T19:37:53.1964Z","caller":"traceutil/trace.go:171","msg":"trace[631264953] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"349.912594ms","start":"2023-12-05T19:37:52.846464Z","end":"2023-12-05T19:37:53.196376Z","steps":["trace[631264953] 'process raft request'  (duration: 180.606596ms)","trace[631264953] 'compare'  (duration: 168.465432ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:37:53.196458Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T19:37:52.84645Z","time spent":"349.977441ms","remote":"127.0.0.1:48186","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":614,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/nginx-ingress.179e06b27bd076ab\" value_size:541 lease:8152513937561942651 >> failure:<>"}
	{"level":"warn","ts":"2023-12-05T19:37:53.196616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.651416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2023-12-05T19:37:53.196634Z","caller":"traceutil/trace.go:171","msg":"trace[1166477007] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1290; }","duration":"258.701138ms","start":"2023-12-05T19:37:52.937927Z","end":"2023-12-05T19:37:53.196628Z","steps":["trace[1166477007] 'agreement among raft nodes before linearized reading'  (duration: 258.62355ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.69657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-wnn8h\" ","response":"range_response_count:1 size:3871"}
	{"level":"info","ts":"2023-12-05T19:37:53.197384Z","caller":"traceutil/trace.go:171","msg":"trace[1769871096] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-wnn8h; range_end:; response_count:1; response_revision:1290; }","duration":"232.761266ms","start":"2023-12-05T19:37:52.964615Z","end":"2023-12-05T19:37:53.197376Z","steps":["trace[1769871096] 'agreement among raft nodes before linearized reading'  (duration: 232.619944ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.437686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:8026"}
	{"level":"info","ts":"2023-12-05T19:37:53.197607Z","caller":"traceutil/trace.go:171","msg":"trace[1323183814] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1290; }","duration":"105.497239ms","start":"2023-12-05T19:37:53.092103Z","end":"2023-12-05T19:37:53.197601Z","steps":["trace[1323183814] 'agreement among raft nodes before linearized reading'  (duration: 105.409375ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.197822Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.203917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/nginx\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T19:37:53.197943Z","caller":"traceutil/trace.go:171","msg":"trace[868965987] range","detail":"{range_begin:/registry/pods/default/nginx; range_end:; response_count:0; response_revision:1290; }","duration":"182.327799ms","start":"2023-12-05T19:37:53.015609Z","end":"2023-12-05T19:37:53.197936Z","steps":["trace[868965987] 'agreement among raft nodes before linearized reading'  (duration: 182.189371ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T19:37:53.198073Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.862595ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-12-05T19:37:53.198111Z","caller":"traceutil/trace.go:171","msg":"trace[1477530667] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1290; }","duration":"219.903232ms","start":"2023-12-05T19:37:52.978202Z","end":"2023-12-05T19:37:53.198105Z","steps":["trace[1477530667] 'agreement among raft nodes before linearized reading'  (duration: 219.841613ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:01.881238Z","caller":"traceutil/trace.go:171","msg":"trace[418673253] linearizableReadLoop","detail":"{readStateIndex:1422; appliedIndex:1421; }","duration":"143.438738ms","start":"2023-12-05T19:38:01.737786Z","end":"2023-12-05T19:38:01.881225Z","steps":["trace[418673253] 'read index received'  (duration: 143.283808ms)","trace[418673253] 'applied index is now lower than readState.Index'  (duration: 154.463µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T19:38:01.881486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.776615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3753"}
	{"level":"info","ts":"2023-12-05T19:38:01.881547Z","caller":"traceutil/trace.go:171","msg":"trace[1754305013] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1378; }","duration":"143.852293ms","start":"2023-12-05T19:38:01.737685Z","end":"2023-12-05T19:38:01.881538Z","steps":["trace[1754305013] 'agreement among raft nodes before linearized reading'  (duration: 143.731227ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:01.881698Z","caller":"traceutil/trace.go:171","msg":"trace[1934344253] transaction","detail":"{read_only:false; response_revision:1378; number_of_response:1; }","duration":"258.040628ms","start":"2023-12-05T19:38:01.623651Z","end":"2023-12-05T19:38:01.881691Z","steps":["trace[1934344253] 'process raft request'  (duration: 257.462618ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T19:38:15.608136Z","caller":"traceutil/trace.go:171","msg":"trace[885706186] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"213.577061ms","start":"2023-12-05T19:38:15.394529Z","end":"2023-12-05T19:38:15.608106Z","steps":["trace[885706186] 'process raft request'  (duration: 213.469345ms)"],"step_count":1}
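	Each "apply request took too long" warning above means the request exceeded etcd's 100ms warning threshold (the "expected-duration" field in the log line); on a 2-vCPU VM this usually points to disk or CPU contention rather than a functional etcd failure. A hedged way to spot-check commit latency after the fact is sketched below; it assumes etcdctl is available inside the control-plane VM and that the etcd client certificates live under /var/lib/minikube/certs/etcd, neither of which is verified from this run.
	  out/minikube-linux-amd64 -p addons-489440 ssh "sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"
	  # endpoint health reports whether the member answered and how long the committed proposal took.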
	
	* 
	* ==> gcp-auth [665ca839baff3cad044133ebcb1d29c53e5462bc936606ccaa61dc28ea1dd7c7] <==
	* 2023/12/05 19:37:32 GCP Auth Webhook started!
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:37 Ready to marshal response ...
	2023/12/05 19:37:37 Ready to write response ...
	2023/12/05 19:37:46 Ready to marshal response ...
	2023/12/05 19:37:46 Ready to write response ...
	2023/12/05 19:37:47 Ready to marshal response ...
	2023/12/05 19:37:47 Ready to write response ...
	2023/12/05 19:37:53 Ready to marshal response ...
	2023/12/05 19:37:53 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:37:56 Ready to marshal response ...
	2023/12/05 19:37:56 Ready to write response ...
	2023/12/05 19:38:10 Ready to marshal response ...
	2023/12/05 19:38:10 Ready to write response ...
	2023/12/05 19:38:13 Ready to marshal response ...
	2023/12/05 19:38:13 Ready to write response ...
	2023/12/05 19:38:26 Ready to marshal response ...
	2023/12/05 19:38:26 Ready to write response ...
	2023/12/05 19:40:15 Ready to marshal response ...
	2023/12/05 19:40:15 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:45:48 up 10 min,  0 users,  load average: 0.46, 0.74, 0.71
	Linux addons-489440 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d96fc281b8835d23aad27be2712bd2584dad513840e815099ceb0d326a54d991] <==
	* I1205 19:37:56.088994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.124.254"}
	I1205 19:38:00.183938       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1205 19:38:18.574570       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.118:8443->10.244.0.29:36362: read: connection reset by peer
	I1205 19:38:22.251263       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 19:38:42.744498       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.744692       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.768215       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.768328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.782957       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.783022       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.797930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.798963       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.805649       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.806197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.820991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.822371       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.836595       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.837079       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 19:38:42.852002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 19:38:42.852059       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1205 19:38:43.798856       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1205 19:38:43.852323       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 19:38:43.862777       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1205 19:40:15.462573       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.76.59"}
	I1205 19:40:55.271848       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [3164ce10034247349621a5bb8800f17f4696f11e6540f95b5c16c0e5a2b7b7b9] <==
	* E1205 19:42:44.793893       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:42:58.613385       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:42:58.613483       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:15.939047       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:15.939164       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:16.739019       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:16.739137       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:51.202505       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:51.202610       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:43:58.122940       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:43:58.122999       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:05.981591       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:05.981646       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:33.567678       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:33.567806       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:44:49.563245       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:44:49.563309       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:01.880403       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:01.880512       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:12.398850       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:12.398947       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:28.404581       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:28.404691       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1205 19:45:46.583822       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 19:45:46.584185       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [08af183c15119ad9d6d6980a5a1810c34ded5782e1934462c94925855a4f6c48] <==
	* I1205 19:36:28.976903       1 server_others.go:69] "Using iptables proxy"
	I1205 19:36:29.105915       1 node.go:141] Successfully retrieved node IP: 192.168.39.118
	I1205 19:36:29.594233       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 19:36:29.594280       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 19:36:29.625838       1 server_others.go:152] "Using iptables Proxier"
	I1205 19:36:29.625973       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 19:36:29.626347       1 server.go:846] "Version info" version="v1.28.4"
	I1205 19:36:29.626422       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:36:29.653843       1 config.go:188] "Starting service config controller"
	I1205 19:36:29.653954       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 19:36:29.654066       1 config.go:97] "Starting endpoint slice config controller"
	I1205 19:36:29.654120       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 19:36:29.675051       1 config.go:315] "Starting node config controller"
	I1205 19:36:29.675214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 19:36:29.766915       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 19:36:29.774044       1 shared_informer.go:318] Caches are synced for service config
	I1205 19:36:29.776793       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [16af3ce986c39994dfb00dc81c18c1972bd75681aad3819675fecd38eae2729a] <==
	* E1205 19:35:55.296377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:35:55.296387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:56.102000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:35:56.102104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:35:56.142592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.142831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:56.242471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:35:56.242531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 19:35:56.290559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.290644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 19:35:56.321003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:35:56.321052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:35:56.358373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:35:56.358424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:35:56.368989       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:35:56.369097       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:35:56.401269       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:35:56.401402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:35:56.422335       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:35:56.422419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 19:35:56.583641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:35:56.583806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:35:56.604049       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:35:56.604108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1205 19:35:58.284637       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 19:35:27 UTC, ends at Tue 2023-12-05 19:45:49 UTC. --
	Dec 05 19:45:11 addons-489440 kubelet[1249]: time="2023-12-05T19:45:11Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:11 addons-489440 kubelet[1249]: time="2023-12-05T19:45:11Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:11 addons-489440 kubelet[1249]: E1205 19:45:11.179654    1249 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:11Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:11Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:11Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:11Z\\\" level=error msg=\\\"runc create failed: unable to start container process: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-78klf" podUID="071e9d7c-a5e8-4d75-add6-8f136264b190"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: E1205 19:45:26.162011    1249 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:45:26 addons-489440 kubelet[1249]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:45:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:26 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:26 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:26Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:26 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:26Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:26 addons-489440 kubelet[1249]:  > podSandboxID="63c041f67019db62b09726892e0e2188a8fc341f92effaacb9d276c8e5b04d39"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: E1205 19:45:26.162280    1249 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVers
ion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Clai
ms:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6rs8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,Pe
riodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-78klf_gadget(071e9d7c-a5e8-4d75-add6-8f136264b190): CreateContainerError: container create failed: time="2023-12-05T19:45:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: time="2023-12-05T19:45:26Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: time="2023-12-05T19:45:26Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: time="2023-12-05T19:45:26Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:26 addons-489440 kubelet[1249]: E1205 19:45:26.162329    1249 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:26Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:26Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:26Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:26Z\\\" level=error msg=\\\"runc create failed: unable to start container process: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-78klf" podUID="071e9d7c-a5e8-4d75-add6-8f136264b190"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: E1205 19:45:40.185976    1249 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err=<
	Dec 05 19:45:40 addons-489440 kubelet[1249]:         rpc error: code = Unknown desc = container create failed: time="2023-12-05T19:45:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:40 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:40 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:40Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:40 addons-489440 kubelet[1249]:         time="2023-12-05T19:45:40Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:40 addons-489440 kubelet[1249]:  > podSandboxID="63c041f67019db62b09726892e0e2188a8fc341f92effaacb9d276c8e5b04d39"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: E1205 19:45:40.186177    1249 kuberuntime_manager.go:1261] container &Container{Name:gadget,Image:ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931,Command:[/entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_POD_UID,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.uid,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVers
ion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:TRACELOOP_POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:GADGET_IMAGE,Value:ghcr.io/inspektor-gadget/inspektor-gadget,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_VERSION,Value:v0.16.1,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_HOOK_MODE,Value:auto,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_OPTION_FALLBACK_POD_INFORMER,Value:true,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CONTAINERD_SOCKETPATH,Value:/run/containerd/containerd.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_CRIO_SOCKETPATH,Value:/run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:INSPEKTOR_GADGET_DOCKER_SOCKETPATH,Value:/run/docker.sock,ValueFrom:nil,},EnvVar{Name:HOST_ROOT,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Clai
ms:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:run,ReadOnly:false,MountPath:/run,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:modules,ReadOnly:false,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:debugfs,ReadOnly:false,MountPath:/sys/kernel/debug,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cgroup,ReadOnly:false,MountPath:/sys/fs/cgroup,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:bpffs,ReadOnly:false,MountPath:/sys/fs/bpf,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6rs8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,Pe
riodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/gadgettracermanager -liveness],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYSLOG SYS_PTRACE SYS_RESOURCE IPC_LOCK SYS_MODULE NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod gadget-78klf_gadget(071e9d7c-a5e8-4d75-add6-8f136264b190): CreateContainerError: container create failed: time="2023-12-05T19:45:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: time="2023-12-05T19:45:40Z" level=warning msg="cannot toggle freezer: cgroups not configured for container"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: time="2023-12-05T19:45:40Z" level=warning msg="lstat : no such file or directory"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: time="2023-12-05T19:45:40Z" level=error msg="runc create failed: unable to start container process: exec: \"/entrypoint.sh\": stat /entrypoint.sh: no such file or directory"
	Dec 05 19:45:40 addons-489440 kubelet[1249]: E1205 19:45:40.186241    1249 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CreateContainerError: \"container create failed: time=\\\"2023-12-05T19:45:40Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:40Z\\\" level=warning msg=\\\"cannot toggle freezer: cgroups not configured for container\\\"\\ntime=\\\"2023-12-05T19:45:40Z\\\" level=warning msg=\\\"lstat : no such file or directory\\\"\\ntime=\\\"2023-12-05T19:45:40Z\\\" level=error msg=\\\"runc create failed: unable to start container process: exec: \\\\\\\"/entrypoint.sh\\\\\\\": stat /entrypoint.sh: no such file or directory\\\"\\n\"" pod="gadget/gadget-78klf" podUID="071e9d7c-a5e8-4d75-add6-8f136264b190"
	
	* 
	* ==> storage-provisioner [56da741b0e67906ca4f6ef411406a35930e125d65d9a1431659f92c1a401aeee] <==
	* I1205 19:36:32.926014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:36:33.131438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:36:33.131531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:36:33.222291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"28a48f8f-9a40-4c83-9762-47cbb70f03c4", APIVersion:"v1", ResourceVersion:"832", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41 became leader
	I1205 19:36:33.222425       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:36:33.234546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41!
	I1205 19:36:33.437109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-489440_bcd1d510-b3d5-4fda-ae7b-0c5df7b93e41!
	E1205 19:38:35.115815       1 controller.go:1050] claim "a0836ea5-43b5-48bb-8971-f863de02e22c" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-489440 -n addons-489440
helpers_test.go:261: (dbg) Run:  kubectl --context addons-489440 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: gadget-78klf
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-489440 describe pod gadget-78klf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-489440 describe pod gadget-78klf: exit status 1 (64.615142ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gadget-78klf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-489440 describe pod gadget-78klf: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (482.71s)
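Editor's note: the kubelet log above shows the gadget container failing with CreateContainerError because runc cannot exec /entrypoint.sh in the image. A minimal diagnostic sketch follows; the "gadget" namespace is taken from the pod name gadget/gadget-78klf above, while the DaemonSet name and the k8s-app=gadget label are assumptions about how the addon deploys.

	# Hedged diagnostic sketch (DaemonSet name and pod label are assumed)
	kubectl --context addons-489440 -n gadget get daemonset,pods -o wide
	kubectl --context addons-489440 -n gadget describe pod -l k8s-app=gadget
	# Check which inspektor-gadget image CRI-O actually pulled on the node
	out/minikube-linux-amd64 -p addons-489440 ssh "sudo crictl images | grep inspektor-gadget"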

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-489440
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-489440: exit status 82 (2m1.480928282s)

                                                
                                                
-- stdout --
	* Stopping node "addons-489440"  ...
	* Stopping node "addons-489440"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-489440" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-489440
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-489440: exit status 11 (21.504744049s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-489440" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-489440
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-489440: exit status 11 (6.142860887s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-489440" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-489440
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-489440: exit status 11 (6.146725763s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-489440" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.28s)
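Editor's note: the stop failed with GUEST_STOP_TIMEOUT while the VM stayed "Running", and the follow-up addon commands then failed because SSH to 192.168.39.118 was unreachable. A manual triage sketch for the kvm2 driver is below; it assumes the libvirt domain is named after the minikube profile (addons-489440).

	# Hedged manual triage for the kvm2 driver (domain name assumed to match the profile)
	sudo virsh list --all                 # confirm the state libvirt reports for the domain
	sudo virsh shutdown addons-489440     # request a graceful ACPI shutdown
	sudo virsh destroy addons-489440      # hard power-off only if the guest never stops
	out/minikube-linux-amd64 status -p addons-489440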

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-341707
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr
2023/12/05 19:53:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr: (8.713280113s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image ls: (2.275957107s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-341707" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.88s)
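Editor's note: here the `image load --daemon` path completed but the tag never showed up in `image ls`. The Audit table later in this report shows the tarball round trip (image save to a .tar, then image load of that file) succeeding for the same image, so a hedged workaround sketch along those lines is shown below; the tar path is illustrative.

	# Hedged workaround via an image tarball instead of the docker daemon
	docker save -o /tmp/addon-resizer.tar gcr.io/google-containers/addon-resizer:functional-341707
	out/minikube-linux-amd64 -p functional-341707 image load /tmp/addon-resizer.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-341707 image ls | grep addon-resizer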

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (166.68s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-376951 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1205 19:55:20.904206   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-376951 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.104735581s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-376951 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-376951 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6e7831e5-e75f-490e-af6e-9525192ab3a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6e7831e5-e75f-490e-af6e-9525192ab3a6] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.015512472s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1205 19:57:37.060556   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:57:46.653810   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:46.659067   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:46.669345   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:46.689643   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:46.729979   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-376951 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.226057613s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
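Editor's note: "Process exited with status 28" is consistent with curl timing out inside the guest rather than being refused outright. A hedged re-check sketch is below; the ingress-nginx namespace and controller selector mirror the wait command used by this test, and the curl flags are only an illustration.

	# Hedged re-check of the ingress path from inside the node
	kubectl --context ingress-addon-legacy-376951 -n ingress-nginx get pods,svc -o wide
	out/minikube-linux-amd64 -p ingress-addon-legacy-376951 ssh \
	  "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"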
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-376951 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
E1205 19:57:46.810125   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:46.970661   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.244
E1205 19:57:47.291397   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons disable ingress-dns --alsologtostderr -v=1
E1205 19:57:47.932221   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:49.212787   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:57:51.774582   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons disable ingress-dns --alsologtostderr -v=1: (6.861428252s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons disable ingress --alsologtostderr -v=1
E1205 19:57:56.895532   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons disable ingress --alsologtostderr -v=1: (7.556025825s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-376951 -n ingress-addon-legacy-376951
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376951 logs -n 25: (1.223218529s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-341707 image ls                                                | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	| image          | functional-341707 image save                                              | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-341707                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707 image rm                                                | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-341707                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707 image ls                                                | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	| image          | functional-341707 image load                                              | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707 image ls                                                | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	| image          | functional-341707 image save --daemon                                     | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-341707                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-341707 ssh pgrep                                               | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707 image build -t                                          | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | localhost/my-image:functional-341707                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-341707                                                         | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-341707 image ls                                                | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	| delete         | -p functional-341707                                                      | functional-341707           | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:53 UTC |
	| start          | -p ingress-addon-legacy-376951                                            | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:53 UTC | 05 Dec 23 19:55 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-376951                                               | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:55 UTC | 05 Dec 23 19:55 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-376951                                               | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:55 UTC | 05 Dec 23 19:55 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-376951                                               | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:55 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-376951 ip                                            | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:57 UTC | 05 Dec 23 19:57 UTC |
	| addons         | ingress-addon-legacy-376951                                               | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:57 UTC | 05 Dec 23 19:57 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-376951                                               | ingress-addon-legacy-376951 | jenkins | v1.32.0 | 05 Dec 23 19:57 UTC | 05 Dec 23 19:58 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:53:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:53:38.488922   22681 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:53:38.489148   22681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:38.489156   22681 out.go:309] Setting ErrFile to fd 2...
	I1205 19:53:38.489161   22681 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:38.489361   22681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 19:53:38.489961   22681 out.go:303] Setting JSON to false
	I1205 19:53:38.490823   22681 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2171,"bootTime":1701803847,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:53:38.490880   22681 start.go:138] virtualization: kvm guest
	I1205 19:53:38.493073   22681 out.go:177] * [ingress-addon-legacy-376951] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:53:38.494549   22681 notify.go:220] Checking for updates...
	I1205 19:53:38.494554   22681 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:53:38.496094   22681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:53:38.497539   22681 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:53:38.499032   22681 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:53:38.500455   22681 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:53:38.501783   22681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:53:38.503372   22681 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:53:38.536895   22681 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 19:53:38.538294   22681 start.go:298] selected driver: kvm2
	I1205 19:53:38.538311   22681 start.go:902] validating driver "kvm2" against <nil>
	I1205 19:53:38.538324   22681 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:53:38.539013   22681 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:53:38.539092   22681 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:53:38.553708   22681 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:53:38.553752   22681 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:53:38.553972   22681 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 19:53:38.553998   22681 cni.go:84] Creating CNI manager for ""
	I1205 19:53:38.554006   22681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:53:38.554022   22681 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:53:38.554035   22681 start_flags.go:323] config:
	{Name:ingress-addon-legacy-376951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-376951 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:53:38.554197   22681 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:53:38.556029   22681 out.go:177] * Starting control plane node ingress-addon-legacy-376951 in cluster ingress-addon-legacy-376951
	I1205 19:53:38.557302   22681 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:53:38.580245   22681 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1205 19:53:38.580290   22681 cache.go:56] Caching tarball of preloaded images
	I1205 19:53:38.580431   22681 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:53:38.582096   22681 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1205 19:53:38.583362   22681 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:53:38.610700   22681 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1205 19:53:43.145817   22681 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:53:43.145909   22681 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:53:44.260293   22681 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
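	A side note on the step above: the preload is fetched with a ?checksum=md5:... query parameter and the tarball on disk is then verified before it is trusted. A rough sketch of that verification, assuming a hypothetical verifyMD5 helper (path and checksum copied from the log; this is an illustration, not minikube's actual code):

    // verify_preload.go — minimal sketch of checking a downloaded preload tarball
    // against an expected md5 checksum (illustrative only).
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Path and expected checksum taken from the download line above.
        err := verifyMD5(
            "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4",
            "0d02e096853189c5b37812b400898e14",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("preload checksum OK")
    }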
	I1205 19:53:44.260673   22681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/config.json ...
	I1205 19:53:44.260717   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/config.json: {Name:mk584916e39815f450b5be06440e33a0a4c90222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:53:44.260937   22681 start.go:365] acquiring machines lock for ingress-addon-legacy-376951: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 19:53:44.260989   22681 start.go:369] acquired machines lock for "ingress-addon-legacy-376951" in 31.106µs
	I1205 19:53:44.261012   22681 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-376951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-376951 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:53:44.261117   22681 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 19:53:44.263274   22681 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1205 19:53:44.263452   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:53:44.263485   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:53:44.277387   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37007
	I1205 19:53:44.277762   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:53:44.278294   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:53:44.278318   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:53:44.278636   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:53:44.278844   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetMachineName
	I1205 19:53:44.279012   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:53:44.279151   22681 start.go:159] libmachine.API.Create for "ingress-addon-legacy-376951" (driver="kvm2")
	I1205 19:53:44.279176   22681 client.go:168] LocalClient.Create starting
	I1205 19:53:44.279211   22681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 19:53:44.279250   22681 main.go:141] libmachine: Decoding PEM data...
	I1205 19:53:44.279272   22681 main.go:141] libmachine: Parsing certificate...
	I1205 19:53:44.279344   22681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 19:53:44.279370   22681 main.go:141] libmachine: Decoding PEM data...
	I1205 19:53:44.279388   22681 main.go:141] libmachine: Parsing certificate...
	I1205 19:53:44.279421   22681 main.go:141] libmachine: Running pre-create checks...
	I1205 19:53:44.279437   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .PreCreateCheck
	I1205 19:53:44.279742   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetConfigRaw
	I1205 19:53:44.280084   22681 main.go:141] libmachine: Creating machine...
	I1205 19:53:44.280102   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Create
	I1205 19:53:44.280231   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Creating KVM machine...
	I1205 19:53:44.281500   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found existing default KVM network
	I1205 19:53:44.282149   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:44.282022   22714 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I1205 19:53:44.287397   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | trying to create private KVM network mk-ingress-addon-legacy-376951 192.168.39.0/24...
	I1205 19:53:44.356269   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951 ...
	I1205 19:53:44.356308   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 19:53:44.356322   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | private KVM network mk-ingress-addon-legacy-376951 192.168.39.0/24 created
	I1205 19:53:44.356387   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:44.356184   22714 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:53:44.356444   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 19:53:44.558557   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:44.558400   22714 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa...
	I1205 19:53:44.724148   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:44.724001   22714 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/ingress-addon-legacy-376951.rawdisk...
	I1205 19:53:44.724173   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Writing magic tar header
	I1205 19:53:44.724188   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Writing SSH key tar header
	I1205 19:53:44.724204   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:44.724136   22714 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951 ...
	I1205 19:53:44.724232   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951
	I1205 19:53:44.724258   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951 (perms=drwx------)
	I1205 19:53:44.724278   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 19:53:44.724286   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 19:53:44.724296   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 19:53:44.724342   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 19:53:44.724373   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:53:44.724394   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 19:53:44.724429   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 19:53:44.724444   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Creating domain...
	I1205 19:53:44.724482   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 19:53:44.724516   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 19:53:44.724534   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home/jenkins
	I1205 19:53:44.724546   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Checking permissions on dir: /home
	I1205 19:53:44.724561   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Skipping /home - not owner
	I1205 19:53:44.726415   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) define libvirt domain using xml: 
	I1205 19:53:44.726440   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) <domain type='kvm'>
	I1205 19:53:44.726453   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <name>ingress-addon-legacy-376951</name>
	I1205 19:53:44.726465   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <memory unit='MiB'>4096</memory>
	I1205 19:53:44.726481   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <vcpu>2</vcpu>
	I1205 19:53:44.726494   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <features>
	I1205 19:53:44.726508   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <acpi/>
	I1205 19:53:44.726520   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <apic/>
	I1205 19:53:44.726530   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <pae/>
	I1205 19:53:44.726535   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     
	I1205 19:53:44.726558   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   </features>
	I1205 19:53:44.726581   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <cpu mode='host-passthrough'>
	I1205 19:53:44.726597   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   
	I1205 19:53:44.726607   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   </cpu>
	I1205 19:53:44.726621   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <os>
	I1205 19:53:44.726641   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <type>hvm</type>
	I1205 19:53:44.726666   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <boot dev='cdrom'/>
	I1205 19:53:44.726688   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <boot dev='hd'/>
	I1205 19:53:44.726704   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <bootmenu enable='no'/>
	I1205 19:53:44.726716   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   </os>
	I1205 19:53:44.726723   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   <devices>
	I1205 19:53:44.726730   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <disk type='file' device='cdrom'>
	I1205 19:53:44.726749   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/boot2docker.iso'/>
	I1205 19:53:44.726769   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <target dev='hdc' bus='scsi'/>
	I1205 19:53:44.726784   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <readonly/>
	I1205 19:53:44.726795   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </disk>
	I1205 19:53:44.726807   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <disk type='file' device='disk'>
	I1205 19:53:44.726820   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 19:53:44.726838   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/ingress-addon-legacy-376951.rawdisk'/>
	I1205 19:53:44.726855   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <target dev='hda' bus='virtio'/>
	I1205 19:53:44.726870   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </disk>
	I1205 19:53:44.726883   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <interface type='network'>
	I1205 19:53:44.726898   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <source network='mk-ingress-addon-legacy-376951'/>
	I1205 19:53:44.726911   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <model type='virtio'/>
	I1205 19:53:44.726929   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </interface>
	I1205 19:53:44.726947   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <interface type='network'>
	I1205 19:53:44.726961   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <source network='default'/>
	I1205 19:53:44.726974   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <model type='virtio'/>
	I1205 19:53:44.726986   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </interface>
	I1205 19:53:44.726995   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <serial type='pty'>
	I1205 19:53:44.727002   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <target port='0'/>
	I1205 19:53:44.727010   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </serial>
	I1205 19:53:44.727016   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <console type='pty'>
	I1205 19:53:44.727028   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <target type='serial' port='0'/>
	I1205 19:53:44.727044   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </console>
	I1205 19:53:44.727058   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     <rng model='virtio'>
	I1205 19:53:44.727093   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)       <backend model='random'>/dev/random</backend>
	I1205 19:53:44.727109   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     </rng>
	I1205 19:53:44.727120   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     
	I1205 19:53:44.727128   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)     
	I1205 19:53:44.727138   22681 main.go:141] libmachine: (ingress-addon-legacy-376951)   </devices>
	I1205 19:53:44.727151   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) </domain>
	I1205 19:53:44.727168   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) 
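	The lines above are the libvirt domain XML that the driver defines for the VM, emitted one log line at a time. As a minimal sketch of how such an XML document is turned into a defined, running domain, assuming the libvirt.org/go/libvirt Go bindings and the XML saved to a local file (an illustration, not the kvm2 driver's own code):

    // define_domain.go — sketch: define and start a libvirt domain from XML
    // (assumes libvirt.org/go/libvirt and a local qemu:///system daemon).
    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'>...</domain> document
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // DomainDefineXML persists the configuration; Create actually boots the VM.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }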
	I1205 19:53:44.731557   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:34:27:d7 in network default
	I1205 19:53:44.732068   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Ensuring networks are active...
	I1205 19:53:44.732101   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:44.732699   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Ensuring network default is active
	I1205 19:53:44.732943   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Ensuring network mk-ingress-addon-legacy-376951 is active
	I1205 19:53:44.733508   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Getting domain xml...
	I1205 19:53:44.734259   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Creating domain...
	I1205 19:53:45.959866   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Waiting to get IP...
	I1205 19:53:45.960773   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:45.961117   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:45.961193   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:45.961128   22714 retry.go:31] will retry after 200.856577ms: waiting for machine to come up
	I1205 19:53:46.163645   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.164102   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.164133   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:46.164055   22714 retry.go:31] will retry after 309.850109ms: waiting for machine to come up
	I1205 19:53:46.475605   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.475995   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.476022   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:46.475976   22714 retry.go:31] will retry after 461.190371ms: waiting for machine to come up
	I1205 19:53:46.938511   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.938940   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:46.938971   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:46.938868   22714 retry.go:31] will retry after 435.794912ms: waiting for machine to come up
	I1205 19:53:47.376385   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:47.376758   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:47.376782   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:47.376681   22714 retry.go:31] will retry after 588.913299ms: waiting for machine to come up
	I1205 19:53:47.967426   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:47.967845   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:47.967877   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:47.967764   22714 retry.go:31] will retry after 789.464074ms: waiting for machine to come up
	I1205 19:53:48.758620   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:48.759070   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:48.759104   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:48.759009   22714 retry.go:31] will retry after 1.020526766s: waiting for machine to come up
	I1205 19:53:49.781261   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:49.781707   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:49.781736   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:49.781658   22714 retry.go:31] will retry after 1.323328685s: waiting for machine to come up
	I1205 19:53:51.107049   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:51.107396   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:51.107435   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:51.107377   22714 retry.go:31] will retry after 1.338999156s: waiting for machine to come up
	I1205 19:53:52.447500   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:52.447948   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:52.447987   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:52.447894   22714 retry.go:31] will retry after 1.592302967s: waiting for machine to come up
	I1205 19:53:54.041681   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:54.042185   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:54.042212   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:54.042098   22714 retry.go:31] will retry after 2.402526s: waiting for machine to come up
	I1205 19:53:56.447497   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:56.447864   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:56.447892   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:56.447821   22714 retry.go:31] will retry after 2.816874912s: waiting for machine to come up
	I1205 19:53:59.267672   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:53:59.268138   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:53:59.268174   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:53:59.268076   22714 retry.go:31] will retry after 3.492911943s: waiting for machine to come up
	I1205 19:54:02.764909   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:02.765282   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find current IP address of domain ingress-addon-legacy-376951 in network mk-ingress-addon-legacy-376951
	I1205 19:54:02.765308   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | I1205 19:54:02.765229   22714 retry.go:31] will retry after 5.546824514s: waiting for machine to come up
	I1205 19:54:08.316321   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:08.316726   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Found IP for machine: 192.168.39.244
	I1205 19:54:08.316759   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has current primary IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
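	The retry.go:31 messages above show the driver polling for the guest's DHCP lease with a growing delay until 192.168.39.244 appears. A generic sketch of that kind of wait loop; lookupIP is a hypothetical stand-in for the lease query, and the backoff only approximates the intervals in the log:

    // wait_for_ip.go — sketch of a grow-and-retry wait loop (illustrative only).
    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil && ip != "" {
                return ip, nil
            }
            log.Printf("will retry after %v: waiting for machine to come up", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the interval, roughly like the log above
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP address", timeout)
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            // Fake lease lookup that "finds" the address after a few attempts.
            attempts++
            if attempts < 4 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.244", nil
        }, 2*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Found IP for machine:", ip)
    }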
	I1205 19:54:08.316770   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Reserving static IP address...
	I1205 19:54:08.317127   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-376951", mac: "52:54:00:e0:0e:cc", ip: "192.168.39.244"} in network mk-ingress-addon-legacy-376951
	I1205 19:54:08.387797   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Getting to WaitForSSH function...
	I1205 19:54:08.387831   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Reserved static IP address: 192.168.39.244
	I1205 19:54:08.387846   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Waiting for SSH to be available...
	I1205 19:54:08.390462   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:08.390762   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951
	I1205 19:54:08.390792   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-376951 interface with MAC address 52:54:00:e0:0e:cc
	I1205 19:54:08.390871   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Using SSH client type: external
	I1205 19:54:08.390910   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa (-rw-------)
	I1205 19:54:08.391034   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:54:08.391058   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | About to run SSH command:
	I1205 19:54:08.391095   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | exit 0
	I1205 19:54:08.394862   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | SSH cmd err, output: exit status 255: 
	I1205 19:54:08.394878   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 19:54:08.394886   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | command : exit 0
	I1205 19:54:08.394892   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | err     : exit status 255
	I1205 19:54:08.394899   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | output  : 
	I1205 19:54:11.395175   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Getting to WaitForSSH function...
	I1205 19:54:11.397430   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.397799   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:11.397823   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.397993   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Using SSH client type: external
	I1205 19:54:11.398016   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa (-rw-------)
	I1205 19:54:11.398045   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 19:54:11.398069   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | About to run SSH command:
	I1205 19:54:11.398084   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | exit 0
	I1205 19:54:11.489807   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | SSH cmd err, output: <nil>: 
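	WaitForSSH above shells out to the system ssh client and runs exit 0 against the guest until the command succeeds; the first attempt fails with exit status 255 because sshd is not listening yet. A stripped-down sketch of that probe using os/exec (sshReady is a hypothetical helper; the address and key path are taken from the log):

    // ssh_probe.go — sketch of a "run `exit 0` over ssh until it works" readiness probe.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", keyPath,
            "docker@"+addr,
            "exit 0",
        )
        return cmd.Run() == nil // a non-nil error means ssh exited non-zero (e.g. status 255)
    }

    func main() {
        addr := "192.168.39.244"
        key := "/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa"
        for !sshReady(addr, key) {
            log.Println("ssh not ready yet, retrying in 3s")
            time.Sleep(3 * time.Second)
        }
        log.Println("SSH is available")
    }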
	I1205 19:54:11.490071   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) KVM machine creation complete!
	I1205 19:54:11.490410   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetConfigRaw
	I1205 19:54:11.491007   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:11.491213   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:11.491359   22681 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 19:54:11.491378   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetState
	I1205 19:54:11.492579   22681 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 19:54:11.492593   22681 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 19:54:11.492599   22681 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 19:54:11.492609   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:11.494817   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.495159   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:11.495185   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.495374   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:11.495540   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.495675   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.495809   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:11.495978   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:11.496300   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:11.496311   22681 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 19:54:11.617655   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:54:11.617682   22681 main.go:141] libmachine: Detecting the provisioner...
	I1205 19:54:11.617690   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:11.620543   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.620891   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:11.620925   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.621113   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:11.621319   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.621513   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.621620   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:11.621794   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:11.622097   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:11.622111   22681 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 19:54:11.743300   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 19:54:11.743359   22681 main.go:141] libmachine: found compatible host: buildroot
	I1205 19:54:11.743367   22681 main.go:141] libmachine: Provisioning with buildroot...
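	The provisioner is detected by running cat /etc/os-release on the guest and matching the result, which here resolves to buildroot. A small sketch of parsing that key=value output (detectProvisioner is a made-up name for illustration):

    // detect_provisioner.go — sketch: match /etc/os-release output to a provisioner name.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func detectProvisioner(osRelease string) string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || !strings.Contains(line, "=") {
                continue
            }
            kv := strings.SplitN(line, "=", 2)
            fields[kv[0]] = strings.Trim(kv[1], `"`)
        }
        switch strings.ToLower(fields["ID"]) {
        case "buildroot":
            return "buildroot"
        case "ubuntu", "debian":
            return "ubuntu"
        default:
            return "unknown"
        }
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2021.02.12-1-gf888a99-dirty\nID=buildroot\nVERSION_ID=2021.02.12\nPRETTY_NAME=\"Buildroot 2021.02.12\"\n"
        fmt.Println("found compatible host:", detectProvisioner(out))
    }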
	I1205 19:54:11.743375   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetMachineName
	I1205 19:54:11.743648   22681 buildroot.go:166] provisioning hostname "ingress-addon-legacy-376951"
	I1205 19:54:11.743684   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetMachineName
	I1205 19:54:11.743898   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:11.746651   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.747001   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:11.747031   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.747138   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:11.747312   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.747486   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.747593   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:11.747738   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:11.748152   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:11.748168   22681 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-376951 && echo "ingress-addon-legacy-376951" | sudo tee /etc/hostname
	I1205 19:54:11.883579   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-376951
	
	I1205 19:54:11.883608   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:11.886383   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.886724   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:11.886759   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:11.886902   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:11.887097   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.887268   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:11.887375   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:11.887529   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:11.887829   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:11.887847   22681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-376951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-376951/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-376951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 19:54:12.018857   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 19:54:12.018888   22681 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 19:54:12.018910   22681 buildroot.go:174] setting up certificates
	I1205 19:54:12.018924   22681 provision.go:83] configureAuth start
	I1205 19:54:12.018937   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetMachineName
	I1205 19:54:12.019206   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetIP
	I1205 19:54:12.021850   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.022182   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.022205   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.022350   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.024381   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.024692   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.024717   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.024816   22681 provision.go:138] copyHostCerts
	I1205 19:54:12.024847   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 19:54:12.024881   22681 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 19:54:12.024902   22681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 19:54:12.024983   22681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 19:54:12.025073   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 19:54:12.025100   22681 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 19:54:12.025107   22681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 19:54:12.025147   22681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 19:54:12.025205   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 19:54:12.025228   22681 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 19:54:12.025237   22681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 19:54:12.025268   22681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 19:54:12.025333   22681 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-376951 san=[192.168.39.244 192.168.39.244 localhost 127.0.0.1 minikube ingress-addon-legacy-376951]
	I1205 19:54:12.169860   22681 provision.go:172] copyRemoteCerts
	I1205 19:54:12.169934   22681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 19:54:12.169962   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.172765   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.173139   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.173170   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.173337   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.173531   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.173693   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.173841   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	I1205 19:54:12.263037   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 19:54:12.263113   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 19:54:12.286464   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 19:54:12.286525   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 19:54:12.309916   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 19:54:12.309981   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 19:54:12.332898   22681 provision.go:86] duration metric: configureAuth took 313.962927ms
	I1205 19:54:12.332921   22681 buildroot.go:189] setting minikube options for container-runtime
	I1205 19:54:12.333120   22681 config.go:182] Loaded profile config "ingress-addon-legacy-376951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:54:12.333204   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.335736   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.336085   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.336116   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.336413   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.336614   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.336770   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.336910   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.337064   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:12.337361   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:12.337378   22681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 19:54:12.664331   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 19:54:12.664354   22681 main.go:141] libmachine: Checking connection to Docker...
	I1205 19:54:12.664366   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetURL
	I1205 19:54:12.665580   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Using libvirt version 6000000
	I1205 19:54:12.667808   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.668127   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.668160   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.668284   22681 main.go:141] libmachine: Docker is up and running!
	I1205 19:54:12.668299   22681 main.go:141] libmachine: Reticulating splines...
	I1205 19:54:12.668307   22681 client.go:171] LocalClient.Create took 28.389121176s
	I1205 19:54:12.668327   22681 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-376951" took 28.389179043s
	I1205 19:54:12.668339   22681 start.go:300] post-start starting for "ingress-addon-legacy-376951" (driver="kvm2")
	I1205 19:54:12.668349   22681 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 19:54:12.668365   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:12.668592   22681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 19:54:12.668612   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.670509   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.670767   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.670794   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.670922   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.671091   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.671224   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.671342   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	I1205 19:54:12.759378   22681 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 19:54:12.763890   22681 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 19:54:12.763914   22681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 19:54:12.763970   22681 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 19:54:12.764035   22681 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 19:54:12.764044   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 19:54:12.764135   22681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 19:54:12.772800   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 19:54:12.798509   22681 start.go:303] post-start completed in 130.156583ms
	I1205 19:54:12.798560   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetConfigRaw
	I1205 19:54:12.799160   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetIP
	I1205 19:54:12.801807   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.802123   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.802151   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.802355   22681 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/config.json ...
	I1205 19:54:12.802576   22681 start.go:128] duration metric: createHost completed in 28.5414466s
	I1205 19:54:12.802605   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.804882   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.805232   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.805267   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.805414   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.805604   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.805771   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.805885   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.806027   22681 main.go:141] libmachine: Using SSH client type: native
	I1205 19:54:12.806344   22681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1205 19:54:12.806356   22681 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 19:54:12.927043   22681 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701806052.914983915
	
	I1205 19:54:12.927158   22681 fix.go:206] guest clock: 1701806052.914983915
	I1205 19:54:12.927182   22681 fix.go:219] Guest: 2023-12-05 19:54:12.914983915 +0000 UTC Remote: 2023-12-05 19:54:12.802589971 +0000 UTC m=+34.361580834 (delta=112.393944ms)
	I1205 19:54:12.927210   22681 fix.go:190] guest clock delta is within tolerance: 112.393944ms
	I1205 19:54:12.927217   22681 start.go:83] releasing machines lock for "ingress-addon-legacy-376951", held for 28.666218819s
	I1205 19:54:12.927251   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:12.927555   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetIP
	I1205 19:54:12.930314   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.930695   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.930724   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.930864   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:12.931502   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:12.931694   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:12.931786   22681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 19:54:12.931827   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.931917   22681 ssh_runner.go:195] Run: cat /version.json
	I1205 19:54:12.931950   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:12.934463   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.934493   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.934751   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.934777   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.934804   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:12.934827   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:12.934867   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.935036   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:12.935038   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.935266   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:12.935271   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.935430   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:12.935431   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	I1205 19:54:12.935563   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	I1205 19:54:13.045556   22681 ssh_runner.go:195] Run: systemctl --version
	I1205 19:54:13.051634   22681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 19:54:13.207963   22681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 19:54:13.214460   22681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 19:54:13.214522   22681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 19:54:13.229236   22681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 19:54:13.229259   22681 start.go:475] detecting cgroup driver to use...
	I1205 19:54:13.229310   22681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 19:54:13.243795   22681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 19:54:13.255720   22681 docker.go:203] disabling cri-docker service (if available) ...
	I1205 19:54:13.255767   22681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 19:54:13.268276   22681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 19:54:13.280300   22681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 19:54:13.382115   22681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 19:54:13.508370   22681 docker.go:219] disabling docker service ...
	I1205 19:54:13.508423   22681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 19:54:13.521930   22681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 19:54:13.534002   22681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 19:54:13.656996   22681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 19:54:13.778484   22681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 19:54:13.791211   22681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 19:54:13.808232   22681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 19:54:13.808289   22681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:54:13.817843   22681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 19:54:13.817892   22681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:54:13.827655   22681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:54:13.837309   22681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 19:54:13.847064   22681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 19:54:13.857224   22681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 19:54:13.865987   22681 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 19:54:13.866034   22681 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 19:54:13.880133   22681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 19:54:13.889276   22681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 19:54:14.004539   22681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 19:54:14.171482   22681 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 19:54:14.171550   22681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 19:54:14.176619   22681 start.go:543] Will wait 60s for crictl version
	I1205 19:54:14.176682   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:14.180978   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 19:54:14.224830   22681 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 19:54:14.224908   22681 ssh_runner.go:195] Run: crio --version
	I1205 19:54:14.275768   22681 ssh_runner.go:195] Run: crio --version
	I1205 19:54:14.329926   22681 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1205 19:54:14.331318   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetIP
	I1205 19:54:14.334055   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:14.334410   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:14.334443   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:14.334622   22681 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 19:54:14.338527   22681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:54:14.350308   22681 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1205 19:54:14.350355   22681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:54:14.384448   22681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:54:14.384505   22681 ssh_runner.go:195] Run: which lz4
	I1205 19:54:14.388109   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 19:54:14.388195   22681 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 19:54:14.392130   22681 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 19:54:14.392156   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1205 19:54:16.314092   22681 crio.go:444] Took 1.925924 seconds to copy over tarball
	I1205 19:54:16.314182   22681 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 19:54:19.585134   22681 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.270918991s)
	I1205 19:54:19.585180   22681 crio.go:451] Took 3.271057 seconds to extract the tarball
	I1205 19:54:19.585194   22681 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 19:54:19.629404   22681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 19:54:19.683397   22681 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1205 19:54:19.683426   22681 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 19:54:19.683477   22681 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:54:19.683512   22681 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:54:19.683563   22681 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:54:19.683578   22681 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1205 19:54:19.683768   22681 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:54:19.683786   22681 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1205 19:54:19.683824   22681 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:54:19.683838   22681 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:54:19.684958   22681 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:54:19.684987   22681 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 19:54:19.684995   22681 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:54:19.685007   22681 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:54:19.684958   22681 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:54:19.685032   22681 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:54:19.684962   22681 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:54:19.684961   22681 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1205 19:54:19.841279   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:54:19.844892   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1205 19:54:19.858446   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1205 19:54:19.860324   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:54:19.862827   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:54:19.881752   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:54:19.919828   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 19:54:19.945137   22681 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1205 19:54:19.945188   22681 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:54:19.945242   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:19.967186   22681 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1205 19:54:19.967232   22681 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1205 19:54:19.967278   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.006681   22681 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1205 19:54:20.006726   22681 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1205 19:54:20.006776   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.012957   22681 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1205 19:54:20.013005   22681 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:54:20.013060   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.017580   22681 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:54:20.043866   22681 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1205 19:54:20.043910   22681 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:54:20.043968   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.051041   22681 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1205 19:54:20.051074   22681 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:54:20.051117   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.051120   22681 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 19:54:20.051152   22681 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 19:54:20.051188   22681 ssh_runner.go:195] Run: which crictl
	I1205 19:54:20.051216   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1205 19:54:20.051247   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1205 19:54:20.051294   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1205 19:54:20.051301   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1205 19:54:20.255062   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1205 19:54:20.255108   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1205 19:54:20.255198   22681 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 19:54:20.255286   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1205 19:54:20.255342   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1205 19:54:20.255419   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1205 19:54:20.255462   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1205 19:54:20.324086   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1205 19:54:20.324172   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1205 19:54:20.330831   22681 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 19:54:20.330890   22681 cache_images.go:92] LoadImages completed in 647.449927ms
	W1205 19:54:20.330968   22681 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1205 19:54:20.331041   22681 ssh_runner.go:195] Run: crio config
	I1205 19:54:20.389671   22681 cni.go:84] Creating CNI manager for ""
	I1205 19:54:20.389695   22681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:54:20.389714   22681 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 19:54:20.389735   22681 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-376951 NodeName:ingress-addon-legacy-376951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 19:54:20.389889   22681 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-376951"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 19:54:20.389980   22681 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-376951 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-376951 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 19:54:20.390040   22681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1205 19:54:20.399059   22681 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 19:54:20.399127   22681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 19:54:20.407170   22681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I1205 19:54:20.422540   22681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1205 19:54:20.438210   22681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I1205 19:54:20.453766   22681 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I1205 19:54:20.457584   22681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 19:54:20.470450   22681 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951 for IP: 192.168.39.244
	I1205 19:54:20.470481   22681 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.470613   22681 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 19:54:20.470647   22681 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 19:54:20.470714   22681 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key
	I1205 19:54:20.470727   22681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt with IP's: []
	I1205 19:54:20.547056   22681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt ...
	I1205 19:54:20.547083   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: {Name:mk83d65baaac221eb2fa79f43b763d1afe8e5790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.547238   22681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key ...
	I1205 19:54:20.547251   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key: {Name:mk49ff4a698d6c9caf8d689472d03624fa36d161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.547326   22681 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key.79850b64
	I1205 19:54:20.547344   22681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt.79850b64 with IP's: [192.168.39.244 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 19:54:20.637688   22681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt.79850b64 ...
	I1205 19:54:20.637725   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt.79850b64: {Name:mkbba10286b2d7ae79c02d113fdbeed420e7c223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.637868   22681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key.79850b64 ...
	I1205 19:54:20.637882   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key.79850b64: {Name:mk469dae240e41d1e1ea2c98015a2ebc3316a735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.637951   22681 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt.79850b64 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt
	I1205 19:54:20.638013   22681 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key.79850b64 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key
	I1205 19:54:20.638058   22681 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.key
	I1205 19:54:20.638071   22681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.crt with IP's: []
	I1205 19:54:20.880682   22681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.crt ...
	I1205 19:54:20.880713   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.crt: {Name:mkb8422a3e9afcc05beb29669853b9cc80269964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.880852   22681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.key ...
	I1205 19:54:20.880865   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.key: {Name:mk4e29abfb0add7d940a0fffc16dbd35053bf902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:20.880932   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 19:54:20.880953   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 19:54:20.880968   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 19:54:20.880980   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 19:54:20.880990   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 19:54:20.881006   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 19:54:20.881018   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 19:54:20.881029   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 19:54:20.881090   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 19:54:20.881125   22681 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 19:54:20.881135   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 19:54:20.881163   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 19:54:20.881184   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 19:54:20.881204   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 19:54:20.881241   22681 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 19:54:20.881297   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:54:20.881317   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 19:54:20.881326   22681 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 19:54:20.881869   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 19:54:20.907735   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 19:54:20.930367   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 19:54:20.952731   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 19:54:20.973770   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 19:54:20.995652   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 19:54:21.018320   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 19:54:21.040965   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 19:54:21.064006   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 19:54:21.088777   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 19:54:21.111226   22681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 19:54:21.133363   22681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 19:54:21.149393   22681 ssh_runner.go:195] Run: openssl version
	I1205 19:54:21.155536   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 19:54:21.165861   22681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 19:54:21.170163   22681 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 19:54:21.170203   22681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 19:54:21.175675   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 19:54:21.186014   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 19:54:21.197471   22681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 19:54:21.202403   22681 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 19:54:21.202454   22681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 19:54:21.208224   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 19:54:21.219099   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 19:54:21.229790   22681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:54:21.234805   22681 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:54:21.234879   22681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 19:54:21.240440   22681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 19:54:21.251482   22681 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 19:54:21.255743   22681 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 19:54:21.255798   22681 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-376951 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-376951 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:54:21.255888   22681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 19:54:21.255940   22681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 19:54:21.293384   22681 cri.go:89] found id: ""
	I1205 19:54:21.293466   22681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 19:54:21.302924   22681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 19:54:21.312171   22681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 19:54:21.321266   22681 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 19:54:21.321322   22681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 19:54:21.383359   22681 kubeadm.go:322] W1205 19:54:21.377520     959 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1205 19:54:21.523577   22681 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 19:54:24.248824   22681 kubeadm.go:322] W1205 19:54:24.243987     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1205 19:54:24.252110   22681 kubeadm.go:322] W1205 19:54:24.247174     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1205 19:54:33.837802   22681 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1205 19:54:33.837876   22681 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 19:54:33.837963   22681 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 19:54:33.838098   22681 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 19:54:33.838218   22681 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 19:54:33.838390   22681 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 19:54:33.838523   22681 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 19:54:33.838600   22681 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 19:54:33.838691   22681 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 19:54:33.840184   22681 out.go:204]   - Generating certificates and keys ...
	I1205 19:54:33.840263   22681 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 19:54:33.840343   22681 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 19:54:33.840438   22681 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 19:54:33.840519   22681 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 19:54:33.840613   22681 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 19:54:33.840668   22681 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 19:54:33.840713   22681 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 19:54:33.840850   22681 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-376951 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I1205 19:54:33.840936   22681 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 19:54:33.841080   22681 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-376951 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I1205 19:54:33.841174   22681 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 19:54:33.841282   22681 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 19:54:33.841335   22681 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 19:54:33.841393   22681 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 19:54:33.841455   22681 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 19:54:33.841505   22681 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 19:54:33.841556   22681 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 19:54:33.841614   22681 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 19:54:33.841669   22681 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 19:54:33.842874   22681 out.go:204]   - Booting up control plane ...
	I1205 19:54:33.842956   22681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 19:54:33.843032   22681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 19:54:33.843121   22681 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 19:54:33.843236   22681 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 19:54:33.843408   22681 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 19:54:33.843481   22681 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004010 seconds
	I1205 19:54:33.843596   22681 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 19:54:33.843741   22681 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 19:54:33.843832   22681 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 19:54:33.843981   22681 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-376951 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 19:54:33.844033   22681 kubeadm.go:322] [bootstrap-token] Using token: qbc7g1.k0m9ez41yzfoqq4t
	I1205 19:54:33.845459   22681 out.go:204]   - Configuring RBAC rules ...
	I1205 19:54:33.845558   22681 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 19:54:33.845650   22681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 19:54:33.845842   22681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 19:54:33.845989   22681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 19:54:33.846101   22681 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 19:54:33.846216   22681 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 19:54:33.846379   22681 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 19:54:33.846443   22681 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 19:54:33.846499   22681 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 19:54:33.846507   22681 kubeadm.go:322] 
	I1205 19:54:33.846566   22681 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 19:54:33.846591   22681 kubeadm.go:322] 
	I1205 19:54:33.846692   22681 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 19:54:33.846699   22681 kubeadm.go:322] 
	I1205 19:54:33.846725   22681 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 19:54:33.846807   22681 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 19:54:33.846879   22681 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 19:54:33.846892   22681 kubeadm.go:322] 
	I1205 19:54:33.846969   22681 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 19:54:33.847083   22681 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 19:54:33.847177   22681 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 19:54:33.847187   22681 kubeadm.go:322] 
	I1205 19:54:33.847292   22681 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 19:54:33.847398   22681 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 19:54:33.847413   22681 kubeadm.go:322] 
	I1205 19:54:33.847504   22681 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qbc7g1.k0m9ez41yzfoqq4t \
	I1205 19:54:33.847619   22681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 19:54:33.847662   22681 kubeadm.go:322]     --control-plane 
	I1205 19:54:33.847669   22681 kubeadm.go:322] 
	I1205 19:54:33.847755   22681 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 19:54:33.847765   22681 kubeadm.go:322] 
	I1205 19:54:33.847851   22681 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qbc7g1.k0m9ez41yzfoqq4t \
	I1205 19:54:33.847948   22681 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 19:54:33.847960   22681 cni.go:84] Creating CNI manager for ""
	I1205 19:54:33.847972   22681 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:54:33.849469   22681 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 19:54:33.850744   22681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 19:54:33.862801   22681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
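The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. A bridge conflist of this kind typically looks roughly like the following; the field values here are illustrative, not the exact file minikube generated:

  sudo cat /etc/cni/net.d/1-k8s.conflist
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }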
	I1205 19:54:33.879252   22681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 19:54:33.879347   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:33.879360   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=ingress-addon-legacy-376951 minikube.k8s.io/updated_at=2023_12_05T19_54_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:34.111443   22681 ops.go:34] apiserver oom_adj: -16
	I1205 19:54:34.111552   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:34.286706   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:34.878421   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:35.378125   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:35.878113   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:36.378405   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:36.878236   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:37.378709   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:37.878160   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:38.378589   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:38.878015   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:39.378777   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:39.878851   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:40.378253   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:40.878858   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:41.378523   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:41.878585   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:42.378575   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:42.878393   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:43.378741   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:43.878369   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:44.378112   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:44.878739   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:45.378302   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:45.878758   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:46.378225   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:46.878251   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:47.378558   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:47.878291   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:48.378463   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:48.878143   22681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 19:54:49.030302   22681 kubeadm.go:1088] duration metric: took 15.15101033s to wait for elevateKubeSystemPrivileges.
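The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, so that the minikube-rbac ClusterRoleBinding created at 19:54:33 (cluster-admin for kube-system:default) is actually usable. Once the cluster is up, the result can be checked from the host; a sketch using the kubectl context this profile configures:

  kubectl --context ingress-addon-legacy-376951 get clusterrolebinding minikube-rbac -o wide
  kubectl --context ingress-addon-legacy-376951 get sa default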
	I1205 19:54:49.030336   22681 kubeadm.go:406] StartCluster complete in 27.77454182s
	I1205 19:54:49.030351   22681 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:49.030428   22681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:54:49.031075   22681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 19:54:49.031296   22681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 19:54:49.031332   22681 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 19:54:49.031410   22681 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-376951"
	I1205 19:54:49.031436   22681 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-376951"
	I1205 19:54:49.031454   22681 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-376951"
	I1205 19:54:49.031487   22681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-376951"
	I1205 19:54:49.031494   22681 host.go:66] Checking if "ingress-addon-legacy-376951" exists ...
	I1205 19:54:49.031510   22681 config.go:182] Loaded profile config "ingress-addon-legacy-376951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1205 19:54:49.031989   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:54:49.032037   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:54:49.032031   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:54:49.032071   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:54:49.031999   22681 kapi.go:59] client config for ingress-addon-legacy-376951: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:54:49.032782   22681 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 19:54:49.047661   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I1205 19:54:49.047677   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I1205 19:54:49.048168   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:54:49.048181   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:54:49.048655   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:54:49.048675   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:54:49.048654   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:54:49.048705   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:54:49.049029   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:54:49.049101   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:54:49.049300   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetState
	I1205 19:54:49.049675   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:54:49.049726   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:54:49.051877   22681 kapi.go:59] client config for ingress-addon-legacy-376951: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:54:49.052254   22681 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-376951"
	I1205 19:54:49.052298   22681 host.go:66] Checking if "ingress-addon-legacy-376951" exists ...
	I1205 19:54:49.052734   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:54:49.052770   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:54:49.065662   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I1205 19:54:49.066149   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:54:49.066684   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:54:49.066709   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:54:49.067039   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:54:49.067238   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetState
	I1205 19:54:49.068022   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I1205 19:54:49.068407   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:54:49.068854   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:54:49.068874   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:54:49.068992   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:49.069197   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:54:49.070851   22681 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 19:54:49.069775   22681 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:54:49.072361   22681 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:54:49.072442   22681 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:54:49.072463   22681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 19:54:49.072483   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:49.075517   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:49.076020   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:49.076049   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:49.076298   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:49.076483   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:49.076734   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:49.076888   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	I1205 19:54:49.088105   22681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1205 19:54:49.088507   22681 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:54:49.089011   22681 main.go:141] libmachine: Using API Version  1
	I1205 19:54:49.089041   22681 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:54:49.089346   22681 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:54:49.089565   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetState
	I1205 19:54:49.091849   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .DriverName
	I1205 19:54:49.092104   22681 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 19:54:49.092119   22681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 19:54:49.092133   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHHostname
	I1205 19:54:49.095262   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:49.095617   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:0e:cc", ip: ""} in network mk-ingress-addon-legacy-376951: {Iface:virbr1 ExpiryTime:2023-12-05 20:54:00 +0000 UTC Type:0 Mac:52:54:00:e0:0e:cc Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ingress-addon-legacy-376951 Clientid:01:52:54:00:e0:0e:cc}
	I1205 19:54:49.095644   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | domain ingress-addon-legacy-376951 has defined IP address 192.168.39.244 and MAC address 52:54:00:e0:0e:cc in network mk-ingress-addon-legacy-376951
	I1205 19:54:49.095823   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHPort
	I1205 19:54:49.096013   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHKeyPath
	I1205 19:54:49.096162   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .GetSSHUsername
	I1205 19:54:49.096309   22681 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/ingress-addon-legacy-376951/id_rsa Username:docker}
	W1205 19:54:49.099543   22681 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-376951" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1205 19:54:49.099572   22681 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1205 19:54:49.099595   22681 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 19:54:49.101973   22681 out.go:177] * Verifying Kubernetes components...
	I1205 19:54:49.103302   22681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:54:49.209021   22681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 19:54:49.209580   22681 kapi.go:59] client config for ingress-addon-legacy-376951: &rest.Config{Host:"https://192.168.39.244:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 19:54:49.209869   22681 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-376951" to be "Ready" ...
	I1205 19:54:49.215236   22681 node_ready.go:49] node "ingress-addon-legacy-376951" has status "Ready":"True"
	I1205 19:54:49.215261   22681 node_ready.go:38] duration metric: took 5.372479ms waiting for node "ingress-addon-legacy-376951" to be "Ready" ...
	I1205 19:54:49.215276   22681 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 19:54:49.221326   22681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace to be "Ready" ...
	I1205 19:54:49.239703   22681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 19:54:49.255931   22681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 19:54:50.247493   22681 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.038433157s)
	I1205 19:54:50.247529   22681 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
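The sed pipeline completed above edits the coredns ConfigMap in place so that the Corefile gains a hosts block (plus a log directive) ahead of the forward plugin. Reconstructed from the sed expressions, the injected fragment is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

It can be inspected afterwards with, for example, kubectl --context ingress-addon-legacy-376951 -n kube-system get configmap coredns -o yaml.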
	I1205 19:54:50.254918   22681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.015173092s)
	I1205 19:54:50.254983   22681 main.go:141] libmachine: Making call to close driver server
	I1205 19:54:50.254997   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Close
	I1205 19:54:50.255305   22681 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:54:50.255309   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Closing plugin on server side
	I1205 19:54:50.255325   22681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:54:50.255344   22681 main.go:141] libmachine: Making call to close driver server
	I1205 19:54:50.255358   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Close
	I1205 19:54:50.255592   22681 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:54:50.255609   22681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:54:50.255617   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Closing plugin on server side
	I1205 19:54:50.288475   22681 main.go:141] libmachine: Making call to close driver server
	I1205 19:54:50.288504   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Close
	I1205 19:54:50.288796   22681 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:54:50.288813   22681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:54:50.348241   22681 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.092260992s)
	I1205 19:54:50.348283   22681 main.go:141] libmachine: Making call to close driver server
	I1205 19:54:50.348292   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Close
	I1205 19:54:50.348575   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Closing plugin on server side
	I1205 19:54:50.348612   22681 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:54:50.348630   22681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:54:50.348648   22681 main.go:141] libmachine: Making call to close driver server
	I1205 19:54:50.348661   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) Calling .Close
	I1205 19:54:50.348849   22681 main.go:141] libmachine: Successfully made call to close driver server
	I1205 19:54:50.348879   22681 main.go:141] libmachine: (ingress-addon-legacy-376951) DBG | Closing plugin on server side
	I1205 19:54:50.348863   22681 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 19:54:50.350809   22681 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 19:54:50.352406   22681 addons.go:502] enable addons completed in 1.321085101s: enabled=[default-storageclass storage-provisioner]
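Both addons are plain manifests applied with the in-VM kubectl, so their effect can be confirmed from the host once the kubeconfig is written; a sketch:

  kubectl --context ingress-addon-legacy-376951 get storageclass
  kubectl --context ingress-addon-legacy-376951 -n kube-system get pod storage-provisioner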
	I1205 19:54:51.274873   22681 pod_ready.go:102] pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace has status "Ready":"False"
	I1205 19:54:53.735071   22681 pod_ready.go:102] pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace has status "Ready":"False"
	I1205 19:54:55.735656   22681 pod_ready.go:102] pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace has status "Ready":"False"
	I1205 19:54:58.236259   22681 pod_ready.go:102] pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace has status "Ready":"False"
	I1205 19:55:00.736279   22681 pod_ready.go:92] pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:00.736304   22681 pod_ready.go:81] duration metric: took 11.514951832s waiting for pod "coredns-66bff467f8-r2cgq" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:00.736315   22681 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xmzk9" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.253631   22681 pod_ready.go:92] pod "coredns-66bff467f8-xmzk9" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:01.253656   22681 pod_ready.go:81] duration metric: took 517.33284ms waiting for pod "coredns-66bff467f8-xmzk9" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.253668   22681 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.259564   22681 pod_ready.go:92] pod "etcd-ingress-addon-legacy-376951" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:01.259583   22681 pod_ready.go:81] duration metric: took 5.906913ms waiting for pod "etcd-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.259591   22681 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.264368   22681 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-376951" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:01.264395   22681 pod_ready.go:81] duration metric: took 4.792531ms waiting for pod "kube-apiserver-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.264406   22681 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.329732   22681 request.go:629] Waited for 65.240349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ingress-addon-legacy-376951
	I1205 19:55:01.529956   22681 request.go:629] Waited for 195.383817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ingress-addon-legacy-376951
	I1205 19:55:01.534485   22681 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-376951" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:01.534505   22681 pod_ready.go:81] duration metric: took 270.092365ms waiting for pod "kube-controller-manager-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.534514   22681 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.729958   22681 request.go:629] Waited for 195.371063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-376951
	I1205 19:55:01.930031   22681 request.go:629] Waited for 196.39108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ingress-addon-legacy-376951
	I1205 19:55:01.934310   22681 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-376951" in "kube-system" namespace has status "Ready":"True"
	I1205 19:55:01.934337   22681 pod_ready.go:81] duration metric: took 399.815332ms waiting for pod "kube-scheduler-ingress-addon-legacy-376951" in "kube-system" namespace to be "Ready" ...
	I1205 19:55:01.934348   22681 pod_ready.go:38] duration metric: took 12.719056693s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
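The readiness polling above (pod_ready.go) is roughly what kubectl wait does for a single condition; an approximate hand-rolled equivalent for the same kube-system pods, as a sketch (minikube additionally restricts the wait to the component labels listed at 19:54:49):

  kubectl --context ingress-addon-legacy-376951 -n kube-system wait pod --all --for=condition=Ready --timeout=6m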
	I1205 19:55:01.934368   22681 api_server.go:52] waiting for apiserver process to appear ...
	I1205 19:55:01.934425   22681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 19:55:01.948210   22681 api_server.go:72] duration metric: took 12.848569766s to wait for apiserver process to appear ...
	I1205 19:55:01.948237   22681 api_server.go:88] waiting for apiserver healthz status ...
	I1205 19:55:01.948254   22681 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I1205 19:55:01.954726   22681 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I1205 19:55:01.955778   22681 api_server.go:141] control plane version: v1.18.20
	I1205 19:55:01.955800   22681 api_server.go:131] duration metric: took 7.557569ms to wait for apiserver health ...
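The healthz probe is just a GET against the apiserver; the same check can be reproduced through the raw API accessor, which prints "ok" when the control plane is healthy:

  kubectl --context ingress-addon-legacy-376951 get --raw /healthz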
	I1205 19:55:01.955808   22681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 19:55:02.130181   22681 request.go:629] Waited for 174.322305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I1205 19:55:02.136517   22681 system_pods.go:59] 8 kube-system pods found
	I1205 19:55:02.136541   22681 system_pods.go:61] "coredns-66bff467f8-r2cgq" [157ec9f4-2dbf-48ec-b833-c3687ed5cef2] Running
	I1205 19:55:02.136546   22681 system_pods.go:61] "coredns-66bff467f8-xmzk9" [21693631-a694-480e-978a-76aadb6e82b0] Running
	I1205 19:55:02.136549   22681 system_pods.go:61] "etcd-ingress-addon-legacy-376951" [9a2743e3-ba7e-420d-a0d5-ae6e6c716212] Running
	I1205 19:55:02.136554   22681 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-376951" [5890c1bb-302b-46a2-816b-d6beb94846ba] Running
	I1205 19:55:02.136558   22681 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-376951" [559e3b54-7f75-493d-af7b-076ca473dcd6] Running
	I1205 19:55:02.136562   22681 system_pods.go:61] "kube-proxy-gljfr" [c14f218e-3820-443d-b141-9afcf4f23751] Running
	I1205 19:55:02.136567   22681 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-376951" [fc00c832-70da-4159-8f46-28ef7245dca6] Running
	I1205 19:55:02.136573   22681 system_pods.go:61] "storage-provisioner" [8d67c195-70ed-4390-9098-91046a17afc8] Running
	I1205 19:55:02.136578   22681 system_pods.go:74] duration metric: took 180.766078ms to wait for pod list to return data ...
	I1205 19:55:02.136587   22681 default_sa.go:34] waiting for default service account to be created ...
	I1205 19:55:02.330011   22681 request.go:629] Waited for 193.347072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I1205 19:55:02.333111   22681 default_sa.go:45] found service account: "default"
	I1205 19:55:02.333143   22681 default_sa.go:55] duration metric: took 196.550108ms for default service account to be created ...
	I1205 19:55:02.333155   22681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 19:55:02.529533   22681 request.go:629] Waited for 196.308367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I1205 19:55:02.536033   22681 system_pods.go:86] 8 kube-system pods found
	I1205 19:55:02.536060   22681 system_pods.go:89] "coredns-66bff467f8-r2cgq" [157ec9f4-2dbf-48ec-b833-c3687ed5cef2] Running
	I1205 19:55:02.536065   22681 system_pods.go:89] "coredns-66bff467f8-xmzk9" [21693631-a694-480e-978a-76aadb6e82b0] Running
	I1205 19:55:02.536069   22681 system_pods.go:89] "etcd-ingress-addon-legacy-376951" [9a2743e3-ba7e-420d-a0d5-ae6e6c716212] Running
	I1205 19:55:02.536073   22681 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-376951" [5890c1bb-302b-46a2-816b-d6beb94846ba] Running
	I1205 19:55:02.536080   22681 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-376951" [559e3b54-7f75-493d-af7b-076ca473dcd6] Running
	I1205 19:55:02.536084   22681 system_pods.go:89] "kube-proxy-gljfr" [c14f218e-3820-443d-b141-9afcf4f23751] Running
	I1205 19:55:02.536087   22681 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-376951" [fc00c832-70da-4159-8f46-28ef7245dca6] Running
	I1205 19:55:02.536091   22681 system_pods.go:89] "storage-provisioner" [8d67c195-70ed-4390-9098-91046a17afc8] Running
	I1205 19:55:02.536096   22681 system_pods.go:126] duration metric: took 202.935635ms to wait for k8s-apps to be running ...
	I1205 19:55:02.536103   22681 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 19:55:02.536141   22681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 19:55:02.549770   22681 system_svc.go:56] duration metric: took 13.656681ms WaitForService to wait for kubelet.
	I1205 19:55:02.549796   22681 kubeadm.go:581] duration metric: took 13.450161874s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 19:55:02.549812   22681 node_conditions.go:102] verifying NodePressure condition ...
	I1205 19:55:02.730255   22681 request.go:629] Waited for 180.382327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I1205 19:55:02.734439   22681 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 19:55:02.734467   22681 node_conditions.go:123] node cpu capacity is 2
	I1205 19:55:02.734479   22681 node_conditions.go:105] duration metric: took 184.661743ms to run NodePressure ...
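The capacity figures above are read from the node object's status; they can be viewed directly with, for example:

  kubectl --context ingress-addon-legacy-376951 get node -o jsonpath='{.items[0].status.capacity}'
  kubectl --context ingress-addon-legacy-376951 describe node ingress-addon-legacy-376951 | grep -A 6 'Conditions:'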
	I1205 19:55:02.734490   22681 start.go:228] waiting for startup goroutines ...
	I1205 19:55:02.734498   22681 start.go:233] waiting for cluster config update ...
	I1205 19:55:02.734510   22681 start.go:242] writing updated cluster config ...
	I1205 19:55:02.734775   22681 ssh_runner.go:195] Run: rm -f paused
	I1205 19:55:02.779073   22681 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1205 19:55:02.781176   22681 out.go:177] 
	W1205 19:55:02.782671   22681 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1205 19:55:02.784067   22681 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1205 19:55:02.785668   22681 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-376951" cluster and "default" namespace by default
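Given the large client/server skew flagged above (kubectl 1.28.4 against a 1.18.20 cluster), the bundled kubectl suggested by minikube is the safer choice; a usage sketch with this profile and the test binary path used elsewhere in this report:

  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 kubectl -- get pods -A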
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 19:53:56 UTC, ends at Tue 2023-12-05 19:58:02 UTC. --
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.452476678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806282452462378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=0a272c34-28a8-4cda-b1a9-d384f96e3a44 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.453244073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=907a32a5-3ecc-4aee-a138-60270e02077d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.453295202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=907a32a5-3ecc-4aee-a138-60270e02077d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.453578030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5faa8e526c89c506f515f733333010fef80a2b262d724dd7efede99a03e71e4b,PodSandboxId:4b0deb68a8b2eacbf6726710a3619206588ae2b98596335e5348b0a123ba8744,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701806270144831429,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kkbfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50e27d6d-cc72-4a92-8925-4789a7bb5406,},Annotations:map[string]string{io.kubernetes.container.hash: edbade33,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f7fd530de0e00b09bdd60d57f48a7c0990414a5eac8743d7e2907fe64eced3,PodSandboxId:a8aa349bffb064fbee3c6d5a4d9f513994225d1bd9e2c57e89839276ddef436c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701806130184214208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7831e5-e75f-490e-af6e-9525192ab3a6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c749be87,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4851f070c99c73e1881ab1397d1b65ebb54d980d7680ddebc94dbfbe4e8c0331,PodSandboxId:c61b66388a865437174e5513f8fa48b82923de48c1e4b8acb79e509b1d242206,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701806115091869839,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w94vt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d,},Annotations:map[string]string{io.kubernetes.container.hash: a27cfdd4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a0134d71a1fa9c9140aa8ecba528e94ec66137d987a68afd6739d71a0ea3ea49,PodSandboxId:1c290aa72f764ea42c0b150efc90126342dd158939936d559c333d4ad9fa0e52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105863261418,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-thjlw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28639f85-2f7f-4722-829e-3f5602f33d34,},Annotations:map[string]string{io.kubernetes.container.hash: c0c0569a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2783c6fb5306d72178a5ad5f3a5de70423f767e73d704f1f1d2145b320e0982,PodSandboxId:36b7106f712e57077b563e69617b08514337033e6b382052e787fd097d410135,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105691893676,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sm5lz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aecbd69c-8a5c-4d52-b832-5fb122521e05,},Annotations:map[string]string{io.kubernetes.container.hash: b462037a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6fdf70b8f0135a5c04c3c70a0b020d83687a2027fbe9d1f360558e318379ff0,PodSandboxId:adfff7b2a385b9630d773f8b4a034f71959b75109581524912efdff769f6e8b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091604971249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-r2cgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157ec9f4-2dbf-48ec-b833-c3687ed5cef2,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b262cb5398138338958b5bd3c90
221cbe090bfb5688c3e9e477e59f3c8837b2a,PodSandboxId:4d63f85e618c1a3fa08e6f6d91d752c77f716ebc0b90d6a56cc951aaf2681b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806091532154726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d67c195-70ed-4390-9098-91046a17afc8,},Annotations:map[string]string{io.kubernetes.container.hash: 227c833b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb9e8127423c4f274bf1e8db2d
7980b59ec524d334db0027e52c173dd143b4,PodSandboxId:40337c85ab9e39de4225a9e3552bcd776ddcb33fb48430004a424a8afdcc36ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091213897006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xmzk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21693631-a694-480e-978a-76aadb6e82b0,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3af8400346c8da3d1dfb4c4482d32547d453e81ed6aaee01dddcb7aa53adf0,PodSandboxId:d2db4e57c8096e4294498bd0344821f9e7f90486a329194c9c677d485c2647f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701806090649093121,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gljfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c14f218e-3820-443d-b141-9afcf4f23751,},Annotations:map[string]string{io.kubernetes.container.hash: 82add0c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d9472cd5cc5a06122fbf6f054329af31961feeceb6fe13d8d85e11cfe7144b1,PodSandboxId:b5804ecff20c4686bc1dab810c40a5eaf8a0c62a39ddd98a151d1f02caadb7cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701806067008577087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b970a9a9e5d9a69969e374dd9278f0a,},Annotations:map[string]string{io.kubernetes.container.hash: 32d33aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50f91585b1d561ccacff96ada98c0a8292646a383b784448a8217d40cd75ec,PodSandboxId:71010b6c7309a87a55992ca4b847d834f5836d2058c8a313d72605a695551fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701806066152804831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf04ca7f900b89e591e5966711eeed06c85245bb21db51bfd5e902edfdebfb19,PodSandboxId:59d54eb3b86508874ad84ce72cea755e604078ba04d6ef87d10c0647fe5cf8f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701806065735813946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2e18bfbb70915b750e66d6019e4eba92f36ab20385bda43aee71194f980ff,PodSandboxId:69791ed9883b8999bf5c4e184c770e5d56330e7dcd0f5d5eae4c89bdaf334605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701806065792621297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=907a32a5-3ecc-4aee-a138-60270e02077d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.492342192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ddb2461f-c6e5-4ffa-be10-293b02b007ef name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.492428151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ddb2461f-c6e5-4ffa-be10-293b02b007ef name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.494374221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f99d5380-5778-408f-bab0-df87b335d7d0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.495279235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806282495264879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=f99d5380-5778-408f-bab0-df87b335d7d0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.496202637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=23ee304a-bd7c-4cc4-be79-66d6800bdb50 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.496276326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=23ee304a-bd7c-4cc4-be79-66d6800bdb50 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.496549015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5faa8e526c89c506f515f733333010fef80a2b262d724dd7efede99a03e71e4b,PodSandboxId:4b0deb68a8b2eacbf6726710a3619206588ae2b98596335e5348b0a123ba8744,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701806270144831429,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kkbfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50e27d6d-cc72-4a92-8925-4789a7bb5406,},Annotations:map[string]string{io.kubernetes.container.hash: edbade33,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f7fd530de0e00b09bdd60d57f48a7c0990414a5eac8743d7e2907fe64eced3,PodSandboxId:a8aa349bffb064fbee3c6d5a4d9f513994225d1bd9e2c57e89839276ddef436c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701806130184214208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7831e5-e75f-490e-af6e-9525192ab3a6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c749be87,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4851f070c99c73e1881ab1397d1b65ebb54d980d7680ddebc94dbfbe4e8c0331,PodSandboxId:c61b66388a865437174e5513f8fa48b82923de48c1e4b8acb79e509b1d242206,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701806115091869839,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w94vt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d,},Annotations:map[string]string{io.kubernetes.container.hash: a27cfdd4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a0134d71a1fa9c9140aa8ecba528e94ec66137d987a68afd6739d71a0ea3ea49,PodSandboxId:1c290aa72f764ea42c0b150efc90126342dd158939936d559c333d4ad9fa0e52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105863261418,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-thjlw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28639f85-2f7f-4722-829e-3f5602f33d34,},Annotations:map[string]string{io.kubernetes.container.hash: c0c0569a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2783c6fb5306d72178a5ad5f3a5de70423f767e73d704f1f1d2145b320e0982,PodSandboxId:36b7106f712e57077b563e69617b08514337033e6b382052e787fd097d410135,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105691893676,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sm5lz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aecbd69c-8a5c-4d52-b832-5fb122521e05,},Annotations:map[string]string{io.kubernetes.container.hash: b462037a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6fdf70b8f0135a5c04c3c70a0b020d83687a2027fbe9d1f360558e318379ff0,PodSandboxId:adfff7b2a385b9630d773f8b4a034f71959b75109581524912efdff769f6e8b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091604971249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-r2cgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157ec9f4-2dbf-48ec-b833-c3687ed5cef2,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b262cb5398138338958b5bd3c90
221cbe090bfb5688c3e9e477e59f3c8837b2a,PodSandboxId:4d63f85e618c1a3fa08e6f6d91d752c77f716ebc0b90d6a56cc951aaf2681b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806091532154726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d67c195-70ed-4390-9098-91046a17afc8,},Annotations:map[string]string{io.kubernetes.container.hash: 227c833b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb9e8127423c4f274bf1e8db2d
7980b59ec524d334db0027e52c173dd143b4,PodSandboxId:40337c85ab9e39de4225a9e3552bcd776ddcb33fb48430004a424a8afdcc36ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091213897006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xmzk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21693631-a694-480e-978a-76aadb6e82b0,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3af8400346c8da3d1dfb4c4482d32547d453e81ed6aaee01dddcb7aa53adf0,PodSandboxId:d2db4e57c8096e4294498bd0344821f9e7f90486a329194c9c677d485c2647f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701806090649093121,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gljfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c14f218e-3820-443d-b141-9afcf4f23751,},Annotations:map[string]string{io.kubernetes.container.hash: 82add0c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d9472cd5cc5a06122fbf6f054329af31961feeceb6fe13d8d85e11cfe7144b1,PodSandboxId:b5804ecff20c4686bc1dab810c40a5eaf8a0c62a39ddd98a151d1f02caadb7cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701806067008577087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b970a9a9e5d9a69969e374dd9278f0a,},Annotations:map[string]string{io.kubernetes.container.hash: 32d33aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50f91585b1d561ccacff96ada98c0a8292646a383b784448a8217d40cd75ec,PodSandboxId:71010b6c7309a87a55992ca4b847d834f5836d2058c8a313d72605a695551fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701806066152804831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf04ca7f900b89e591e5966711eeed06c85245bb21db51bfd5e902edfdebfb19,PodSandboxId:59d54eb3b86508874ad84ce72cea755e604078ba04d6ef87d10c0647fe5cf8f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701806065735813946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2e18bfbb70915b750e66d6019e4eba92f36ab20385bda43aee71194f980ff,PodSandboxId:69791ed9883b8999bf5c4e184c770e5d56330e7dcd0f5d5eae4c89bdaf334605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701806065792621297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=23ee304a-bd7c-4cc4-be79-66d6800bdb50 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.543535924Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51d28e4e-08d2-495f-aad4-22f702f009c4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.543592310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51d28e4e-08d2-495f-aad4-22f702f009c4 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.544562805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=926a887f-d6ac-4a09-b30b-9973d72109f4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.545207923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806282545190813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=926a887f-d6ac-4a09-b30b-9973d72109f4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.545922498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de9addfa-4277-423e-9832-cd7edb0b4b1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.545968904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de9addfa-4277-423e-9832-cd7edb0b4b1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.546244482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5faa8e526c89c506f515f733333010fef80a2b262d724dd7efede99a03e71e4b,PodSandboxId:4b0deb68a8b2eacbf6726710a3619206588ae2b98596335e5348b0a123ba8744,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701806270144831429,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kkbfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50e27d6d-cc72-4a92-8925-4789a7bb5406,},Annotations:map[string]string{io.kubernetes.container.hash: edbade33,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f7fd530de0e00b09bdd60d57f48a7c0990414a5eac8743d7e2907fe64eced3,PodSandboxId:a8aa349bffb064fbee3c6d5a4d9f513994225d1bd9e2c57e89839276ddef436c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701806130184214208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7831e5-e75f-490e-af6e-9525192ab3a6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c749be87,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4851f070c99c73e1881ab1397d1b65ebb54d980d7680ddebc94dbfbe4e8c0331,PodSandboxId:c61b66388a865437174e5513f8fa48b82923de48c1e4b8acb79e509b1d242206,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701806115091869839,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w94vt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d,},Annotations:map[string]string{io.kubernetes.container.hash: a27cfdd4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a0134d71a1fa9c9140aa8ecba528e94ec66137d987a68afd6739d71a0ea3ea49,PodSandboxId:1c290aa72f764ea42c0b150efc90126342dd158939936d559c333d4ad9fa0e52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105863261418,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-thjlw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28639f85-2f7f-4722-829e-3f5602f33d34,},Annotations:map[string]string{io.kubernetes.container.hash: c0c0569a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2783c6fb5306d72178a5ad5f3a5de70423f767e73d704f1f1d2145b320e0982,PodSandboxId:36b7106f712e57077b563e69617b08514337033e6b382052e787fd097d410135,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105691893676,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sm5lz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aecbd69c-8a5c-4d52-b832-5fb122521e05,},Annotations:map[string]string{io.kubernetes.container.hash: b462037a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6fdf70b8f0135a5c04c3c70a0b020d83687a2027fbe9d1f360558e318379ff0,PodSandboxId:adfff7b2a385b9630d773f8b4a034f71959b75109581524912efdff769f6e8b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091604971249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-r2cgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157ec9f4-2dbf-48ec-b833-c3687ed5cef2,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b262cb5398138338958b5bd3c90
221cbe090bfb5688c3e9e477e59f3c8837b2a,PodSandboxId:4d63f85e618c1a3fa08e6f6d91d752c77f716ebc0b90d6a56cc951aaf2681b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806091532154726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d67c195-70ed-4390-9098-91046a17afc8,},Annotations:map[string]string{io.kubernetes.container.hash: 227c833b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb9e8127423c4f274bf1e8db2d
7980b59ec524d334db0027e52c173dd143b4,PodSandboxId:40337c85ab9e39de4225a9e3552bcd776ddcb33fb48430004a424a8afdcc36ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091213897006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xmzk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21693631-a694-480e-978a-76aadb6e82b0,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3af8400346c8da3d1dfb4c4482d32547d453e81ed6aaee01dddcb7aa53adf0,PodSandboxId:d2db4e57c8096e4294498bd0344821f9e7f90486a329194c9c677d485c2647f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701806090649093121,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gljfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c14f218e-3820-443d-b141-9afcf4f23751,},Annotations:map[string]string{io.kubernetes.container.hash: 82add0c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d9472cd5cc5a06122fbf6f054329af31961feeceb6fe13d8d85e11cfe7144b1,PodSandboxId:b5804ecff20c4686bc1dab810c40a5eaf8a0c62a39ddd98a151d1f02caadb7cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701806067008577087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b970a9a9e5d9a69969e374dd9278f0a,},Annotations:map[string]string{io.kubernetes.container.hash: 32d33aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50f91585b1d561ccacff96ada98c0a8292646a383b784448a8217d40cd75ec,PodSandboxId:71010b6c7309a87a55992ca4b847d834f5836d2058c8a313d72605a695551fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701806066152804831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf04ca7f900b89e591e5966711eeed06c85245bb21db51bfd5e902edfdebfb19,PodSandboxId:59d54eb3b86508874ad84ce72cea755e604078ba04d6ef87d10c0647fe5cf8f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701806065735813946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2e18bfbb70915b750e66d6019e4eba92f36ab20385bda43aee71194f980ff,PodSandboxId:69791ed9883b8999bf5c4e184c770e5d56330e7dcd0f5d5eae4c89bdaf334605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701806065792621297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de9addfa-4277-423e-9832-cd7edb0b4b1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.579089325Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c8fdd184-6d28-427b-b74d-33a0a8cac922 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.579143861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c8fdd184-6d28-427b-b74d-33a0a8cac922 name=/runtime.v1.RuntimeService/Version
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.580116295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9111e563-c914-4d64-9510-ab65b61b890d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.580603212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806282580589605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=9111e563-c914-4d64-9510-ab65b61b890d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.581279558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4dc7418e-404e-43f0-9493-f163cf83084c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.581353492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4dc7418e-404e-43f0-9493-f163cf83084c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 19:58:02 ingress-addon-legacy-376951 crio[718]: time="2023-12-05 19:58:02.581625664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5faa8e526c89c506f515f733333010fef80a2b262d724dd7efede99a03e71e4b,PodSandboxId:4b0deb68a8b2eacbf6726710a3619206588ae2b98596335e5348b0a123ba8744,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701806270144831429,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-kkbfp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 50e27d6d-cc72-4a92-8925-4789a7bb5406,},Annotations:map[string]string{io.kubernetes.container.hash: edbade33,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68f7fd530de0e00b09bdd60d57f48a7c0990414a5eac8743d7e2907fe64eced3,PodSandboxId:a8aa349bffb064fbee3c6d5a4d9f513994225d1bd9e2c57e89839276ddef436c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc,State:CONTAINER_RUNNING,CreatedAt:1701806130184214208,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7831e5-e75f-490e-af6e-9525192ab3a6,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: c749be87,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4851f070c99c73e1881ab1397d1b65ebb54d980d7680ddebc94dbfbe4e8c0331,PodSandboxId:c61b66388a865437174e5513f8fa48b82923de48c1e4b8acb79e509b1d242206,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701806115091869839,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-w94vt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d,},Annotations:map[string]string{io.kubernetes.container.hash: a27cfdd4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a0134d71a1fa9c9140aa8ecba528e94ec66137d987a68afd6739d71a0ea3ea49,PodSandboxId:1c290aa72f764ea42c0b150efc90126342dd158939936d559c333d4ad9fa0e52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105863261418,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-thjlw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28639f85-2f7f-4722-829e-3f5602f33d34,},Annotations:map[string]string{io.kubernetes.container.hash: c0c0569a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2783c6fb5306d72178a5ad5f3a5de70423f767e73d704f1f1d2145b320e0982,PodSandboxId:36b7106f712e57077b563e69617b08514337033e6b382052e787fd097d410135,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701806105691893676,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sm5lz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: aecbd69c-8a5c-4d52-b832-5fb122521e05,},Annotations:map[string]string{io.kubernetes.container.hash: b462037a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6fdf70b8f0135a5c04c3c70a0b020d83687a2027fbe9d1f360558e318379ff0,PodSandboxId:adfff7b2a385b9630d773f8b4a034f71959b75109581524912efdff769f6e8b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091604971249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-r2cgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157ec9f4-2dbf-48ec-b833-c3687ed5cef2,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b262cb5398138338958b5bd3c90
221cbe090bfb5688c3e9e477e59f3c8837b2a,PodSandboxId:4d63f85e618c1a3fa08e6f6d91d752c77f716ebc0b90d6a56cc951aaf2681b63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806091532154726,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d67c195-70ed-4390-9098-91046a17afc8,},Annotations:map[string]string{io.kubernetes.container.hash: 227c833b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fabb9e8127423c4f274bf1e8db2d
7980b59ec524d334db0027e52c173dd143b4,PodSandboxId:40337c85ab9e39de4225a9e3552bcd776ddcb33fb48430004a424a8afdcc36ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701806091213897006,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-xmzk9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21693631-a694-480e-978a-76aadb6e82b0,},Annotations:map[string]string{io.kubernetes.container.hash: de1bf142,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a3af8400346c8da3d1dfb4c4482d32547d453e81ed6aaee01dddcb7aa53adf0,PodSandboxId:d2db4e57c8096e4294498bd0344821f9e7f90486a329194c9c677d485c2647f7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701806090649093121,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gljfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c14f218e-3820-443d-b141-9afcf4f23751,},Annotations:map[string]string{io.kubernetes.container.hash: 82add0c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d9472cd5cc5a06122fbf6f054329af31961feeceb6fe13d8d85e11cfe7144b1,PodSandboxId:b5804ecff20c4686bc1dab810c40a5eaf8a0c62a39ddd98a151d1f02caadb7cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701806067008577087,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b970a9a9e5d9a69969e374dd9278f0a,},Annotations:map[string]string{io.kubernetes.container.hash: 32d33aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c50f91585b1d561ccacff96ada98c0a8292646a383b784448a8217d40cd75ec,PodSandboxId:71010b6c7309a87a55992ca4b847d834f5836d2058c8a313d72605a695551fa2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701806066152804831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf04ca7f900b89e591e5966711eeed06c85245bb21db51bfd5e902edfdebfb19,PodSandboxId:59d54eb3b86508874ad84ce72cea755e604078ba04d6ef87d10c0647fe5cf8f5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701806065735813946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2e18bfbb70915b750e66d6019e4eba92f36ab20385bda43aee71194f980ff,PodSandboxId:69791ed9883b8999bf5c4e184c770e5d56330e7dcd0f5d5eae4c89bdaf334605,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701806065792621297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-376951,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53abda991029f9e054ae37d0cc603b56,},Annotations:map[string]string{io.kubernetes.container.hash: 743d4145,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4dc7418e-404e-43f0-9493-f163cf83084c name=/runtime.v1.RuntimeService/ListContainers
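	(Editor's note, not part of the captured log: the repeated ListContainers / ImageFsInfo / Version entries above are CRI-O answering the kubelet's periodic CRI polling over its gRPC socket. As a rough sketch of the same call — an illustration only, assuming the default CRI-O socket path `/var/run/crio/crio.sock` and not part of the test harness — the RuntimeService.ListContainers request seen in these entries can be issued directly with the k8s.io/cri-api client:)

	```go
	// Sketch: issue the same RuntimeService.ListContainers call that CRI-O is
	// serving in the debug log above. Socket path is an assumption.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter, so the runtime returns the full container list,
		// matching the "No filters were applied" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Container IDs are 64 hex chars; print the short form like crictl does.
			fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State.String())
		}
	}
	```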
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5faa8e526c89c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            12 seconds ago      Running             hello-world-app           0                   4b0deb68a8b2e       hello-world-app-5f5d8b66bb-kkbfp
	68f7fd530de0e       docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                    2 minutes ago       Running             nginx                     0                   a8aa349bffb06       nginx
	4851f070c99c7       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   c61b66388a865       ingress-nginx-controller-7fcf777cb7-w94vt
	a0134d71a1fa9       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   1c290aa72f764       ingress-nginx-admission-patch-thjlw
	a2783c6fb5306       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   36b7106f712e5       ingress-nginx-admission-create-sm5lz
	f6fdf70b8f013       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   adfff7b2a385b       coredns-66bff467f8-r2cgq
	b262cb5398138       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   4d63f85e618c1       storage-provisioner
	fabb9e8127423       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   40337c85ab9e3       coredns-66bff467f8-xmzk9
	6a3af8400346c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   d2db4e57c8096       kube-proxy-gljfr
	6d9472cd5cc5a       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   b5804ecff20c4       etcd-ingress-addon-legacy-376951
	5c50f91585b1d       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   71010b6c7309a       kube-controller-manager-ingress-addon-legacy-376951
	4ac2e18bfbb70       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   69791ed9883b8       kube-apiserver-ingress-addon-legacy-376951
	cf04ca7f900b8       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   59d54eb3b8650       kube-scheduler-ingress-addon-legacy-376951
	
	* 
	* ==> coredns [f6fdf70b8f0135a5c04c3c70a0b020d83687a2027fbe9d1f360558e318379ff0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:40959 - 1404 "HINFO IN 6748704652402505820.4717820472290438019. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008700492s
	[INFO] 10.244.0.6:35484 - 21567 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000369745s
	[INFO] 10.244.0.6:35484 - 11042 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000226921s
	[INFO] 10.244.0.6:35484 - 51882 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094433s
	[INFO] 10.244.0.6:35484 - 30334 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113487s
	[INFO] 10.244.0.6:35484 - 27779 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101607s
	[INFO] 10.244.0.6:35484 - 54535 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077387s
	[INFO] 10.244.0.6:35484 - 58728 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000157285s
	[INFO] 10.244.0.6:59702 - 42646 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099646s
	[INFO] 10.244.0.6:59702 - 28926 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054004s
	[INFO] 10.244.0.6:59702 - 65262 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065494s
	[INFO] 10.244.0.6:59702 - 64280 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000122892s
	[INFO] 10.244.0.6:59702 - 30345 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031569s
	[INFO] 10.244.0.6:59702 - 17934 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061159s
	[INFO] 10.244.0.6:59702 - 55718 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041414s
	
	* 
	* ==> coredns [fabb9e8127423c4f274bf1e8db2d7980b59ec524d334db0027e52c173dd143b4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 6dca4351036a5cca7eefa7c93a3dea30
	CoreDNS-1.6.7
	linux/amd64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:56374 - 4312 "HINFO IN 8635410737193401588.8799357058821868188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012121632s
	[INFO] 10.244.0.6:40194 - 49662 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000307785s
	[INFO] 10.244.0.6:40194 - 48797 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111402s
	[INFO] 10.244.0.6:40194 - 25800 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000133928s
	[INFO] 10.244.0.6:40194 - 45145 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107744s
	[INFO] 10.244.0.6:40194 - 7622 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106209s
	[INFO] 10.244.0.6:40194 - 38148 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000208106s
	[INFO] 10.244.0.6:40194 - 16631 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000121065s
	[INFO] 10.244.0.6:44613 - 55668 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084407s
	[INFO] 10.244.0.6:44613 - 31297 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070916s
	[INFO] 10.244.0.6:44613 - 57357 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059201s
	[INFO] 10.244.0.6:44613 - 31659 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000112533s
	[INFO] 10.244.0.6:44613 - 47733 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062081s
	[INFO] 10.244.0.6:44613 - 46134 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118724s
	[INFO] 10.244.0.6:44613 - 41253 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064762s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-376951
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-376951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=ingress-addon-legacy-376951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T19_54_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 19:54:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-376951
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 19:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 19:55:34 +0000   Tue, 05 Dec 2023 19:54:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 19:55:34 +0000   Tue, 05 Dec 2023 19:54:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 19:55:34 +0000   Tue, 05 Dec 2023 19:54:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 19:55:34 +0000   Tue, 05 Dec 2023 19:54:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ingress-addon-legacy-376951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 75e5ccc0356747ba9ea67e5577af426a
	  System UUID:                75e5ccc0-3567-47ba-9ea6-7e5577af426a
	  Boot ID:                    7d3cbca6-4179-435d-b1e3-a1c3457d2b60
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-kkbfp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 coredns-66bff467f8-r2cgq                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m14s
	  kube-system                 coredns-66bff467f8-xmzk9                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m14s
	  kube-system                 etcd-ingress-addon-legacy-376951                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-376951             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-376951    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-proxy-gljfr                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-ingress-addon-legacy-376951             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m38s (x5 over 3m38s)  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s (x5 over 3m38s)  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s (x4 over 3m38s)  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m28s                  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s                  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s                  kubelet     Node ingress-addon-legacy-376951 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m18s                  kubelet     Node ingress-addon-legacy-376951 status is now: NodeReady
	  Normal  Starting                 3m12s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 5 19:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092778] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.443818] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.459124] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150461] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Dec 5 19:54] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.942956] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.111938] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.160512] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.118513] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.230946] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +7.850108] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +2.632721] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +8.982457] systemd-fstab-generator[1422]: Ignoring "noauto" for root device
	[ +17.594854] kauditd_printk_skb: 6 callbacks suppressed
	[Dec 5 19:55] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.743737] kauditd_printk_skb: 8 callbacks suppressed
	[ +20.806832] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 5 19:57] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.557002] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [6d9472cd5cc5a06122fbf6f054329af31961feeceb6fe13d8d85e11cfe7144b1] <==
	* 2023-12-05 19:54:27.140088 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/12/05 19:54:27 INFO: 38b93d7e943acb5d switched to configuration voters=(4087365750677490525)
	2023-12-05 19:54:27.140586 I | etcdserver/membership: added member 38b93d7e943acb5d [https://192.168.39.244:2380] to cluster ae521d247b31ac74
	2023-12-05 19:54:27.140668 I | etcdserver: 38b93d7e943acb5d as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-05 19:54:27.140917 I | embed: listening for peers on 192.168.39.244:2380
	raft2023/12/05 19:54:27 INFO: 38b93d7e943acb5d is starting a new election at term 1
	raft2023/12/05 19:54:27 INFO: 38b93d7e943acb5d became candidate at term 2
	raft2023/12/05 19:54:27 INFO: 38b93d7e943acb5d received MsgVoteResp from 38b93d7e943acb5d at term 2
	raft2023/12/05 19:54:27 INFO: 38b93d7e943acb5d became leader at term 2
	raft2023/12/05 19:54:27 INFO: raft.node: 38b93d7e943acb5d elected leader 38b93d7e943acb5d at term 2
	2023-12-05 19:54:27.523893 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-05 19:54:27.525403 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-05 19:54:27.525485 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-05 19:54:27.525508 I | etcdserver: published {Name:ingress-addon-legacy-376951 ClientURLs:[https://192.168.39.244:2379]} to cluster ae521d247b31ac74
	2023-12-05 19:54:27.525542 I | embed: ready to serve client requests
	2023-12-05 19:54:27.525675 I | embed: ready to serve client requests
	2023-12-05 19:54:27.527049 I | embed: serving client requests on 192.168.39.244:2379
	2023-12-05 19:54:27.527099 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-05 19:54:49.931348 W | etcdserver: request "header:<ID:14654022949909935484 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-gljfr\" mod_revision:347 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-gljfr\" value_size:4514 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-gljfr\" > >>" with result "size:16" took too long (256.763556ms) to execute
	2023-12-05 19:54:50.207044 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (533.06994ms) to execute
	2023-12-05 19:54:50.223942 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (339.571882ms) to execute
	2023-12-05 19:54:50.225066 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (368.685059ms) to execute
	2023-12-05 19:54:50.225905 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-376951\" " with result "range_response_count:1 size:6296" took too long (422.85486ms) to execute
	2023-12-05 19:54:50.235763 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-r2cgq\" " with result "range_response_count:1 size:3753" took too long (492.346407ms) to execute
	2023-12-05 19:55:12.315976 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (223.836734ms) to execute
	
	* 
	* ==> kernel <==
	*  19:58:02 up 4 min,  0 users,  load average: 0.46, 0.41, 0.19
	Linux ingress-addon-legacy-376951 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4ac2e18bfbb70915b750e66d6019e4eba92f36ab20385bda43aee71194f980ff] <==
	* I1205 19:54:31.437656       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1205 19:54:31.437753       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 19:54:31.445606       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1205 19:54:31.451304       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:54:31.451315       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1205 19:54:31.928614       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:54:31.970412       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1205 19:54:32.114387       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.244]
	I1205 19:54:32.115362       1 controller.go:609] quota admission added evaluator for: endpoints
	I1205 19:54:32.121677       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:54:32.788458       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1205 19:54:33.739449       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1205 19:54:33.818376       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1205 19:54:34.067622       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:54:48.874030       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1205 19:54:49.306810       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:54:50.209023       1 trace.go:116] Trace[400857834]: "Get" url:/api/v1/namespaces/kube-system/configmaps/coredns,user-agent:kubectl/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:127.0.0.1 (started: 2023-12-05 19:54:49.564612099 +0000 UTC m=+23.616214701) (total time: 644.383582ms):
	Trace[400857834]: [644.324053ms] [644.315952ms] About to write a response
	I1205 19:54:50.210075       1 trace.go:116] Trace[295631253]: "GuaranteedUpdate etcd3" type:*certificates.CertificateSigningRequest (started: 2023-12-05 19:54:49.649377317 +0000 UTC m=+23.700979939) (total time: 560.6777ms):
	Trace[295631253]: [560.555669ms] [555.243633ms] Transaction committed
	I1205 19:54:50.210568       1 trace.go:116] Trace[941575028]: "Update" url:/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/csr-x6g2l/status,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:certificate-controller,client:192.168.39.244 (started: 2023-12-05 19:54:49.649272588 +0000 UTC m=+23.700875207) (total time: 561.183687ms):
	Trace[941575028]: [561.055947ms] [560.987626ms] Object stored in database
	I1205 19:55:03.592328       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1205 19:55:27.340114       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1205 19:57:55.135923       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [5c50f91585b1d561ccacff96ada98c0a8292646a383b784448a8217d40cd75ec] <==
	* W1205 19:54:49.251915       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-376951. Assuming now as a timestamp.
	I1205 19:54:49.251967       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1205 19:54:49.252291       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1205 19:54:49.253023       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-376951", UID:"993c17d1-4bbe-49b8-bee5-ba907e070392", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-376951 event: Registered Node ingress-addon-legacy-376951 in Controller
	I1205 19:54:49.269070       1 shared_informer.go:230] Caches are synced for stateful set 
	I1205 19:54:49.269963       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1205 19:54:49.321310       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1205 19:54:49.347147       1 shared_informer.go:230] Caches are synced for resource quota 
	I1205 19:54:49.377663       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"87834274-7f4f-401e-a3d1-ccb99baaaca1", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gljfr
	I1205 19:54:49.417421       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1205 19:54:49.419084       1 shared_informer.go:230] Caches are synced for resource quota 
	I1205 19:54:49.465983       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1205 19:54:49.514854       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:54:49.515004       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1205 19:54:49.518632       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1205 19:54:49.519001       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1205 19:55:03.573638       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c0c1d5e9-db0e-483c-a995-63464a12655f", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1205 19:55:03.621633       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"608d35bc-9c17-4c96-8d49-d79501f4a411", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-w94vt
	I1205 19:55:03.652834       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7c71bf1b-2e79-4b93-8c55-480d8e11b710", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-sm5lz
	I1205 19:55:03.687072       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"015b5f05-2463-490c-acec-6a6f74a8d50f", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-thjlw
	I1205 19:55:06.321060       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"015b5f05-2463-490c-acec-6a6f74a8d50f", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:55:06.352849       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7c71bf1b-2e79-4b93-8c55-480d8e11b710", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1205 19:57:46.986487       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"4217fb8e-b764-439f-a598-a8c980427c64", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1205 19:57:47.014348       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"c28b36e0-30f7-4c68-9246-6c7f7b2a6aac", APIVersion:"apps/v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-kkbfp
	E1205 19:57:59.866458       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-zpxft" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [6a3af8400346c8da3d1dfb4c4482d32547d453e81ed6aaee01dddcb7aa53adf0] <==
	* W1205 19:54:50.977034       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1205 19:54:50.985274       1 node.go:136] Successfully retrieved node IP: 192.168.39.244
	I1205 19:54:50.985418       1 server_others.go:186] Using iptables Proxier.
	I1205 19:54:50.985789       1 server.go:583] Version: v1.18.20
	I1205 19:54:50.989443       1 config.go:133] Starting endpoints config controller
	I1205 19:54:50.989575       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1205 19:54:50.989672       1 config.go:315] Starting service config controller
	I1205 19:54:50.989690       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1205 19:54:51.094928       1 shared_informer.go:230] Caches are synced for service config 
	I1205 19:54:51.095004       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [cf04ca7f900b89e591e5966711eeed06c85245bb21db51bfd5e902edfdebfb19] <==
	* I1205 19:54:30.552778       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1205 19:54:30.553424       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:54:30.554009       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 19:54:30.554047       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1205 19:54:30.554967       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:54:30.555181       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:54:30.556034       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:54:30.562858       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 19:54:30.563111       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:54:30.563318       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:54:30.563518       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:54:30.563680       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 19:54:30.563892       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:54:30.564051       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:54:30.564374       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:54:30.564553       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:54:31.408757       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 19:54:31.436504       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:54:31.449435       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:54:31.614932       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:54:31.687934       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 19:54:31.964050       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 19:54:33.657290       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1205 19:54:48.985574       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xmzk9 is already present in the active queue
	E1205 19:54:49.025825       1 factory.go:503] pod: kube-system/coredns-66bff467f8-r2cgq is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 19:53:56 UTC, ends at Tue 2023-12-05 19:58:03 UTC. --
	Dec 05 19:55:06 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:55:06.542960    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-jzjq5" (UniqueName: "kubernetes.io/secret/28639f85-2f7f-4722-829e-3f5602f33d34-ingress-nginx-admission-token-jzjq5") on node "ingress-addon-legacy-376951" DevicePath ""
	Dec 05 19:55:07 ingress-addon-legacy-376951 kubelet[1428]: W1205 19:55:07.311243    1428 pod_container_deletor.go:77] Container "1c290aa72f764ea42c0b150efc90126342dd158939936d559c333d4ad9fa0e52" not found in pod's containers
	Dec 05 19:55:07 ingress-addon-legacy-376951 kubelet[1428]: W1205 19:55:07.313819    1428 pod_container_deletor.go:77] Container "36b7106f712e57077b563e69617b08514337033e6b382052e787fd097d410135" not found in pod's containers
	Dec 05 19:55:16 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:55:16.943797    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 05 19:55:17 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:55:17.078423    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-2mq26" (UniqueName: "kubernetes.io/secret/92fc105a-e803-4818-bbdf-6ef3b3c7ed7e-minikube-ingress-dns-token-2mq26") pod "kube-ingress-dns-minikube" (UID: "92fc105a-e803-4818-bbdf-6ef3b3c7ed7e")
	Dec 05 19:55:27 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:55:27.514896    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 05 19:55:27 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:55:27.515836    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kd6qn" (UniqueName: "kubernetes.io/secret/6e7831e5-e75f-490e-af6e-9525192ab3a6-default-token-kd6qn") pod "nginx" (UID: "6e7831e5-e75f-490e-af6e-9525192ab3a6")
	Dec 05 19:57:47 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:47.033032    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Dec 05 19:57:47 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:47.096335    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kd6qn" (UniqueName: "kubernetes.io/secret/50e27d6d-cc72-4a92-8925-4789a7bb5406-default-token-kd6qn") pod "hello-world-app-5f5d8b66bb-kkbfp" (UID: "50e27d6d-cc72-4a92-8925-4789a7bb5406")
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:48.605106    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5bebbd4910ec31bb1e27b4cdadcda5e007dbb4975fe30a04a8fcb8266c49f711
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:48.704234    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2mq26" (UniqueName: "kubernetes.io/secret/92fc105a-e803-4818-bbdf-6ef3b3c7ed7e-minikube-ingress-dns-token-2mq26") pod "92fc105a-e803-4818-bbdf-6ef3b3c7ed7e" (UID: "92fc105a-e803-4818-bbdf-6ef3b3c7ed7e")
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:48.722220    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92fc105a-e803-4818-bbdf-6ef3b3c7ed7e-minikube-ingress-dns-token-2mq26" (OuterVolumeSpecName: "minikube-ingress-dns-token-2mq26") pod "92fc105a-e803-4818-bbdf-6ef3b3c7ed7e" (UID: "92fc105a-e803-4818-bbdf-6ef3b3c7ed7e"). InnerVolumeSpecName "minikube-ingress-dns-token-2mq26". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:48.804577    1428 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2mq26" (UniqueName: "kubernetes.io/secret/92fc105a-e803-4818-bbdf-6ef3b3c7ed7e-minikube-ingress-dns-token-2mq26") on node "ingress-addon-legacy-376951" DevicePath ""
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:48.990122    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5bebbd4910ec31bb1e27b4cdadcda5e007dbb4975fe30a04a8fcb8266c49f711
	Dec 05 19:57:48 ingress-addon-legacy-376951 kubelet[1428]: E1205 19:57:48.991537    1428 remote_runtime.go:295] ContainerStatus "5bebbd4910ec31bb1e27b4cdadcda5e007dbb4975fe30a04a8fcb8266c49f711" from runtime service failed: rpc error: code = NotFound desc = could not find container "5bebbd4910ec31bb1e27b4cdadcda5e007dbb4975fe30a04a8fcb8266c49f711": container with ID starting with 5bebbd4910ec31bb1e27b4cdadcda5e007dbb4975fe30a04a8fcb8266c49f711 not found: ID does not exist
	Dec 05 19:57:55 ingress-addon-legacy-376951 kubelet[1428]: E1205 19:57:55.115613    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w94vt.179e07ca6886664f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w94vt", UID:"d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-376951"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fe10c69e884f, ext:201547438324, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fe10c69e884f, ext:201547438324, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w94vt.179e07ca6886664f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:57:55 ingress-addon-legacy-376951 kubelet[1428]: E1205 19:57:55.134742    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w94vt.179e07ca6886664f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w94vt", UID:"d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-376951"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc153fe10c69e884f, ext:201547438324, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc153fe10c7912c31, ext:201563339988, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w94vt.179e07ca6886664f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 05 19:57:57 ingress-addon-legacy-376951 kubelet[1428]: W1205 19:57:57.657036    1428 pod_container_deletor.go:77] Container "c61b66388a865437174e5513f8fa48b82923de48c1e4b8acb79e509b1d242206" not found in pod's containers
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.240598    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-webhook-cert") pod "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d" (UID: "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d")
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.240641    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-665zn" (UniqueName: "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-ingress-nginx-token-665zn") pod "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d" (UID: "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d")
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.245480    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-ingress-nginx-token-665zn" (OuterVolumeSpecName: "ingress-nginx-token-665zn") pod "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d" (UID: "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d"). InnerVolumeSpecName "ingress-nginx-token-665zn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.245625    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d" (UID: "d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.340982    1428 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-webhook-cert") on node "ingress-addon-legacy-376951" DevicePath ""
	Dec 05 19:57:59 ingress-addon-legacy-376951 kubelet[1428]: I1205 19:57:59.341045    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-token-665zn" (UniqueName: "kubernetes.io/secret/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d-ingress-nginx-token-665zn") on node "ingress-addon-legacy-376951" DevicePath ""
	Dec 05 19:58:00 ingress-addon-legacy-376951 kubelet[1428]: W1205 19:58:00.157136    1428 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d9f0f52c-9478-4cb0-8a93-6e0ef6bac73d/volumes" does not exist
	
	* 
	* ==> storage-provisioner [b262cb5398138338958b5bd3c90221cbe090bfb5688c3e9e477e59f3c8837b2a] <==
	* I1205 19:54:51.771200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:54:51.790749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:54:51.790897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:54:51.798152       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:54:51.798390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-376951_68417266-0bf0-47db-94cc-202f5ae7f033!
	I1205 19:54:51.801427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc0401e0-1c82-41ff-b627-d44650b50ed2", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-376951_68417266-0bf0-47db-94cc-202f5ae7f033 became leader
	I1205 19:54:51.898862       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-376951_68417266-0bf0-47db-94cc-202f5ae7f033!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-376951 -n ingress-addon-legacy-376951
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-376951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (166.68s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (201.569818ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-6www8): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- sh -c "ping -c 1 192.168.39.1": exit status 1 (185.682918ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-phsxm): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-558947 -n multinode-558947
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-558947 logs -n 25: (1.345018666s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-202745 ssh -- ls                    | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:02 UTC | 05 Dec 23 20:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-202745 ssh --                       | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:02 UTC | 05 Dec 23 20:02 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-202745                           | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:02 UTC | 05 Dec 23 20:02 UTC |
	| start   | -p mount-start-2-202745                           | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:02 UTC | 05 Dec 23 20:03 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC |                     |
	|         | --profile mount-start-2-202745                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-202745 ssh -- ls                    | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-202745 ssh --                       | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-202745                           | mount-start-2-202745 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| delete  | -p mount-start-1-189139                           | mount-start-1-189139 | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:03 UTC |
	| start   | -p multinode-558947                               | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:03 UTC | 05 Dec 23 20:04 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- apply -f                   | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:04 UTC | 05 Dec 23 20:04 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- rollout                    | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:04 UTC | 05 Dec 23 20:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- get pods -o                | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- get pods -o                | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-6www8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-phsxm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-6www8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-phsxm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-6www8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-phsxm -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- get pods -o                | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-6www8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC |                     |
	|         | busybox-5bc68d56bd-6www8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | busybox-5bc68d56bd-phsxm                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-558947 -- exec                       | multinode-558947     | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC |                     |
	|         | busybox-5bc68d56bd-phsxm -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
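	(Note: the table above is the recorded CLI history for these profiles; the multi-line mount entry collapses back into a single invocation roughly like the following, with every flag value taken verbatim from the table and the empty --ip left exactly as logged.)
	$ minikube mount /home/jenkins:/minikube-host \
	    --profile mount-start-2-202745 --v 0 \
	    --9p-version 9p2000.L --gid 0 --ip "" --msize 6543 \
	    --port 46465 --type 9p --uid 0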
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:03:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
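	(Note: the entries below follow the klog/glog header format described above. A quick, hedged way to slice out the level+date, timestamp and source location with standard tools, assuming the log were saved to a hypothetical file last_start.log, might be:)
	$ awk '$1 ~ /^[IWEF][0-9]+$/ {print $1, $2, $4}' last_start.log
	I1205 20:03:10.625496 out.go:296]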
	I1205 20:03:10.625496   26786 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:03:10.625780   26786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:03:10.625790   26786 out.go:309] Setting ErrFile to fd 2...
	I1205 20:03:10.625796   26786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:03:10.625960   26786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:03:10.626548   26786 out.go:303] Setting JSON to false
	I1205 20:03:10.627446   26786 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2744,"bootTime":1701803847,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:03:10.627504   26786 start.go:138] virtualization: kvm guest
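	(Note: the hostinfo line above is a single JSON object. If it were copied into a hypothetical hostinfo.json, the interesting fields could be pulled out with jq, e.g.:)
	$ jq -r '.hostname, .kernelVersion, .virtualizationSystem, .virtualizationRole' hostinfo.json
	ubuntu-20-agent-13
	5.15.0-1047-gcp
	kvm
	guest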
	I1205 20:03:10.630078   26786 out.go:177] * [multinode-558947] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:03:10.631829   26786 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:03:10.631827   26786 notify.go:220] Checking for updates...
	I1205 20:03:10.633514   26786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:03:10.635105   26786 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:03:10.636522   26786 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:03:10.637871   26786 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:03:10.639280   26786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:03:10.640960   26786 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:03:10.675470   26786 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:03:10.676792   26786 start.go:298] selected driver: kvm2
	I1205 20:03:10.676803   26786 start.go:902] validating driver "kvm2" against <nil>
	I1205 20:03:10.676821   26786 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:03:10.677527   26786 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:03:10.677604   26786 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:03:10.693010   26786 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:03:10.693070   26786 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 20:03:10.693261   26786 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:03:10.693321   26786 cni.go:84] Creating CNI manager for ""
	I1205 20:03:10.693332   26786 cni.go:136] 0 nodes found, recommending kindnet
	I1205 20:03:10.693341   26786 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:03:10.693348   26786 start_flags.go:323] config:
	{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:03:10.693478   26786 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:03:10.695222   26786 out.go:177] * Starting control plane node multinode-558947 in cluster multinode-558947
	I1205 20:03:10.696504   26786 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:03:10.696537   26786 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:03:10.696544   26786 cache.go:56] Caching tarball of preloaded images
	I1205 20:03:10.696627   26786 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:03:10.696641   26786 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:03:10.696942   26786 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:03:10.696963   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json: {Name:mk0166cd7e7c445d274ab399d732b688e8bad652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
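	(Note: the profile config saved above persists the cluster config printed a few lines earlier. Assuming the struct fields marshal under their Go names, which is how minikube normally writes config.json, a sanity check might look like:)
	$ jq -r '.KubernetesConfig.KubernetesVersion, .KubernetesConfig.ContainerRuntime' \
	    /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json
	v1.28.4
	crio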
	I1205 20:03:10.697111   26786 start.go:365] acquiring machines lock for multinode-558947: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:03:10.697144   26786 start.go:369] acquired machines lock for "multinode-558947" in 15.599µs
	I1205 20:03:10.697159   26786 start.go:93] Provisioning new machine with config: &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:03:10.697207   26786 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:03:10.698761   26786 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:03:10.698894   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:03:10.698946   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:03:10.712811   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I1205 20:03:10.713233   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:03:10.713751   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:03:10.713776   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:03:10.714070   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:03:10.714254   26786 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:03:10.714420   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:10.714549   26786 start.go:159] libmachine.API.Create for "multinode-558947" (driver="kvm2")
	I1205 20:03:10.714597   26786 client.go:168] LocalClient.Create starting
	I1205 20:03:10.714624   26786 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 20:03:10.714656   26786 main.go:141] libmachine: Decoding PEM data...
	I1205 20:03:10.714679   26786 main.go:141] libmachine: Parsing certificate...
	I1205 20:03:10.714730   26786 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 20:03:10.714747   26786 main.go:141] libmachine: Decoding PEM data...
	I1205 20:03:10.714759   26786 main.go:141] libmachine: Parsing certificate...
	I1205 20:03:10.714772   26786 main.go:141] libmachine: Running pre-create checks...
	I1205 20:03:10.714780   26786 main.go:141] libmachine: (multinode-558947) Calling .PreCreateCheck
	I1205 20:03:10.715097   26786 main.go:141] libmachine: (multinode-558947) Calling .GetConfigRaw
	I1205 20:03:10.715461   26786 main.go:141] libmachine: Creating machine...
	I1205 20:03:10.715476   26786 main.go:141] libmachine: (multinode-558947) Calling .Create
	I1205 20:03:10.715587   26786 main.go:141] libmachine: (multinode-558947) Creating KVM machine...
	I1205 20:03:10.716876   26786 main.go:141] libmachine: (multinode-558947) DBG | found existing default KVM network
	I1205 20:03:10.717460   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:10.717339   26809 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I1205 20:03:10.722371   26786 main.go:141] libmachine: (multinode-558947) DBG | trying to create private KVM network mk-multinode-558947 192.168.39.0/24...
	I1205 20:03:10.791902   26786 main.go:141] libmachine: (multinode-558947) DBG | private KVM network mk-multinode-558947 192.168.39.0/24 created
	I1205 20:03:10.792005   26786 main.go:141] libmachine: (multinode-558947) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947 ...
	I1205 20:03:10.792045   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:10.791895   26809 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:03:10.792065   26786 main.go:141] libmachine: (multinode-558947) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 20:03:10.792344   26786 main.go:141] libmachine: (multinode-558947) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 20:03:10.993830   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:10.993701   26809 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa...
	I1205 20:03:11.110309   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:11.110162   26809 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/multinode-558947.rawdisk...
	I1205 20:03:11.110340   26786 main.go:141] libmachine: (multinode-558947) DBG | Writing magic tar header
	I1205 20:03:11.110362   26786 main.go:141] libmachine: (multinode-558947) DBG | Writing SSH key tar header
	I1205 20:03:11.110376   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:11.110285   26809 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947 ...
	I1205 20:03:11.110401   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947 (perms=drwx------)
	I1205 20:03:11.110429   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:03:11.110440   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947
	I1205 20:03:11.110448   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 20:03:11.110464   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 20:03:11.110480   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:03:11.110494   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 20:03:11.110509   26786 main.go:141] libmachine: (multinode-558947) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:03:11.110518   26786 main.go:141] libmachine: (multinode-558947) Creating domain...
	I1205 20:03:11.110526   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:03:11.110534   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 20:03:11.110541   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:03:11.110547   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:03:11.110559   26786 main.go:141] libmachine: (multinode-558947) DBG | Checking permissions on dir: /home
	I1205 20:03:11.110608   26786 main.go:141] libmachine: (multinode-558947) DBG | Skipping /home - not owner
	I1205 20:03:11.111658   26786 main.go:141] libmachine: (multinode-558947) define libvirt domain using xml: 
	I1205 20:03:11.111693   26786 main.go:141] libmachine: (multinode-558947) <domain type='kvm'>
	I1205 20:03:11.111707   26786 main.go:141] libmachine: (multinode-558947)   <name>multinode-558947</name>
	I1205 20:03:11.111721   26786 main.go:141] libmachine: (multinode-558947)   <memory unit='MiB'>2200</memory>
	I1205 20:03:11.111775   26786 main.go:141] libmachine: (multinode-558947)   <vcpu>2</vcpu>
	I1205 20:03:11.111786   26786 main.go:141] libmachine: (multinode-558947)   <features>
	I1205 20:03:11.111795   26786 main.go:141] libmachine: (multinode-558947)     <acpi/>
	I1205 20:03:11.111806   26786 main.go:141] libmachine: (multinode-558947)     <apic/>
	I1205 20:03:11.111812   26786 main.go:141] libmachine: (multinode-558947)     <pae/>
	I1205 20:03:11.111822   26786 main.go:141] libmachine: (multinode-558947)     
	I1205 20:03:11.111828   26786 main.go:141] libmachine: (multinode-558947)   </features>
	I1205 20:03:11.111837   26786 main.go:141] libmachine: (multinode-558947)   <cpu mode='host-passthrough'>
	I1205 20:03:11.111847   26786 main.go:141] libmachine: (multinode-558947)   
	I1205 20:03:11.111855   26786 main.go:141] libmachine: (multinode-558947)   </cpu>
	I1205 20:03:11.111863   26786 main.go:141] libmachine: (multinode-558947)   <os>
	I1205 20:03:11.111875   26786 main.go:141] libmachine: (multinode-558947)     <type>hvm</type>
	I1205 20:03:11.111884   26786 main.go:141] libmachine: (multinode-558947)     <boot dev='cdrom'/>
	I1205 20:03:11.111907   26786 main.go:141] libmachine: (multinode-558947)     <boot dev='hd'/>
	I1205 20:03:11.111921   26786 main.go:141] libmachine: (multinode-558947)     <bootmenu enable='no'/>
	I1205 20:03:11.111926   26786 main.go:141] libmachine: (multinode-558947)   </os>
	I1205 20:03:11.111932   26786 main.go:141] libmachine: (multinode-558947)   <devices>
	I1205 20:03:11.111938   26786 main.go:141] libmachine: (multinode-558947)     <disk type='file' device='cdrom'>
	I1205 20:03:11.111950   26786 main.go:141] libmachine: (multinode-558947)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/boot2docker.iso'/>
	I1205 20:03:11.111958   26786 main.go:141] libmachine: (multinode-558947)       <target dev='hdc' bus='scsi'/>
	I1205 20:03:11.112003   26786 main.go:141] libmachine: (multinode-558947)       <readonly/>
	I1205 20:03:11.112031   26786 main.go:141] libmachine: (multinode-558947)     </disk>
	I1205 20:03:11.112041   26786 main.go:141] libmachine: (multinode-558947)     <disk type='file' device='disk'>
	I1205 20:03:11.112054   26786 main.go:141] libmachine: (multinode-558947)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:03:11.112069   26786 main.go:141] libmachine: (multinode-558947)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/multinode-558947.rawdisk'/>
	I1205 20:03:11.112077   26786 main.go:141] libmachine: (multinode-558947)       <target dev='hda' bus='virtio'/>
	I1205 20:03:11.112084   26786 main.go:141] libmachine: (multinode-558947)     </disk>
	I1205 20:03:11.112092   26786 main.go:141] libmachine: (multinode-558947)     <interface type='network'>
	I1205 20:03:11.112099   26786 main.go:141] libmachine: (multinode-558947)       <source network='mk-multinode-558947'/>
	I1205 20:03:11.112104   26786 main.go:141] libmachine: (multinode-558947)       <model type='virtio'/>
	I1205 20:03:11.112112   26786 main.go:141] libmachine: (multinode-558947)     </interface>
	I1205 20:03:11.112118   26786 main.go:141] libmachine: (multinode-558947)     <interface type='network'>
	I1205 20:03:11.112127   26786 main.go:141] libmachine: (multinode-558947)       <source network='default'/>
	I1205 20:03:11.112132   26786 main.go:141] libmachine: (multinode-558947)       <model type='virtio'/>
	I1205 20:03:11.112141   26786 main.go:141] libmachine: (multinode-558947)     </interface>
	I1205 20:03:11.112146   26786 main.go:141] libmachine: (multinode-558947)     <serial type='pty'>
	I1205 20:03:11.112155   26786 main.go:141] libmachine: (multinode-558947)       <target port='0'/>
	I1205 20:03:11.112166   26786 main.go:141] libmachine: (multinode-558947)     </serial>
	I1205 20:03:11.112178   26786 main.go:141] libmachine: (multinode-558947)     <console type='pty'>
	I1205 20:03:11.112187   26786 main.go:141] libmachine: (multinode-558947)       <target type='serial' port='0'/>
	I1205 20:03:11.112193   26786 main.go:141] libmachine: (multinode-558947)     </console>
	I1205 20:03:11.112201   26786 main.go:141] libmachine: (multinode-558947)     <rng model='virtio'>
	I1205 20:03:11.112208   26786 main.go:141] libmachine: (multinode-558947)       <backend model='random'>/dev/random</backend>
	I1205 20:03:11.112215   26786 main.go:141] libmachine: (multinode-558947)     </rng>
	I1205 20:03:11.112221   26786 main.go:141] libmachine: (multinode-558947)     
	I1205 20:03:11.112228   26786 main.go:141] libmachine: (multinode-558947)     
	I1205 20:03:11.112234   26786 main.go:141] libmachine: (multinode-558947)   </devices>
	I1205 20:03:11.112250   26786 main.go:141] libmachine: (multinode-558947) </domain>
	I1205 20:03:11.112260   26786 main.go:141] libmachine: (multinode-558947) 
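	(Note: the XML above is the libvirt domain definition the kvm2 driver hands to qemu:///system. Assuming virsh is available on the host, the resulting domain and its private network could be inspected directly with standard virsh commands, e.g.:)
	$ virsh -c qemu:///system dumpxml multinode-558947
	$ virsh -c qemu:///system net-dhcp-leases mk-multinode-558947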
	I1205 20:03:11.117088   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:26:08:e8 in network default
	I1205 20:03:11.117617   26786 main.go:141] libmachine: (multinode-558947) Ensuring networks are active...
	I1205 20:03:11.117636   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:11.118342   26786 main.go:141] libmachine: (multinode-558947) Ensuring network default is active
	I1205 20:03:11.118661   26786 main.go:141] libmachine: (multinode-558947) Ensuring network mk-multinode-558947 is active
	I1205 20:03:11.119113   26786 main.go:141] libmachine: (multinode-558947) Getting domain xml...
	I1205 20:03:11.119840   26786 main.go:141] libmachine: (multinode-558947) Creating domain...
	I1205 20:03:12.347865   26786 main.go:141] libmachine: (multinode-558947) Waiting to get IP...
	I1205 20:03:12.348629   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:12.349024   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:12.349043   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:12.349000   26809 retry.go:31] will retry after 274.801578ms: waiting for machine to come up
	I1205 20:03:12.625502   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:12.625918   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:12.625943   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:12.625863   26809 retry.go:31] will retry after 325.212438ms: waiting for machine to come up
	I1205 20:03:12.952532   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:12.952985   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:12.953015   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:12.952930   26809 retry.go:31] will retry after 451.758207ms: waiting for machine to come up
	I1205 20:03:13.406648   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:13.407102   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:13.407128   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:13.407072   26809 retry.go:31] will retry after 543.43832ms: waiting for machine to come up
	I1205 20:03:13.952587   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:13.953063   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:13.953084   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:13.953025   26809 retry.go:31] will retry after 588.322695ms: waiting for machine to come up
	I1205 20:03:14.542990   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:14.543513   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:14.543532   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:14.543462   26809 retry.go:31] will retry after 729.276774ms: waiting for machine to come up
	I1205 20:03:15.274471   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:15.274905   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:15.274939   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:15.274851   26809 retry.go:31] will retry after 1.110702705s: waiting for machine to come up
	I1205 20:03:16.386869   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:16.387410   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:16.387468   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:16.387364   26809 retry.go:31] will retry after 1.306072062s: waiting for machine to come up
	I1205 20:03:17.695815   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:17.696316   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:17.696347   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:17.696249   26809 retry.go:31] will retry after 1.431431765s: waiting for machine to come up
	I1205 20:03:19.130092   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:19.130544   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:19.130566   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:19.130497   26809 retry.go:31] will retry after 2.261317101s: waiting for machine to come up
	I1205 20:03:21.393977   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:21.394466   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:21.394497   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:21.394421   26809 retry.go:31] will retry after 2.197347724s: waiting for machine to come up
	I1205 20:03:23.594915   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:23.595363   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:23.595394   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:23.595322   26809 retry.go:31] will retry after 2.978886358s: waiting for machine to come up
	I1205 20:03:26.575379   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:26.575804   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:26.575863   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:26.575790   26809 retry.go:31] will retry after 3.210652256s: waiting for machine to come up
	I1205 20:03:29.787663   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:29.787986   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:03:29.788102   26786 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:03:29.787950   26809 retry.go:31] will retry after 4.426294848s: waiting for machine to come up
	I1205 20:03:34.218737   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.219247   26786 main.go:141] libmachine: (multinode-558947) Found IP for machine: 192.168.39.3
	I1205 20:03:34.219275   26786 main.go:141] libmachine: (multinode-558947) Reserving static IP address...
	I1205 20:03:34.219291   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has current primary IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.219705   26786 main.go:141] libmachine: (multinode-558947) DBG | unable to find host DHCP lease matching {name: "multinode-558947", mac: "52:54:00:ca:d0:61", ip: "192.168.39.3"} in network mk-multinode-558947
	I1205 20:03:34.292265   26786 main.go:141] libmachine: (multinode-558947) DBG | Getting to WaitForSSH function...
	I1205 20:03:34.292296   26786 main.go:141] libmachine: (multinode-558947) Reserved static IP address: 192.168.39.3
	I1205 20:03:34.292325   26786 main.go:141] libmachine: (multinode-558947) Waiting for SSH to be available...
	I1205 20:03:34.294979   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.295369   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.295403   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.295680   26786 main.go:141] libmachine: (multinode-558947) DBG | Using SSH client type: external
	I1205 20:03:34.295704   26786 main.go:141] libmachine: (multinode-558947) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa (-rw-------)
	I1205 20:03:34.295724   26786 main.go:141] libmachine: (multinode-558947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:03:34.295735   26786 main.go:141] libmachine: (multinode-558947) DBG | About to run SSH command:
	I1205 20:03:34.295747   26786 main.go:141] libmachine: (multinode-558947) DBG | exit 0
	I1205 20:03:34.398070   26786 main.go:141] libmachine: (multinode-558947) DBG | SSH cmd err, output: <nil>: 
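	(Note: the WaitForSSH probe logged above is equivalent to running roughly the following from the host, with every option and the key path taken verbatim from the log; exit status 0 means the guest's sshd accepted the key.)
	$ ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa \
	    -p 22 docker@192.168.39.3 'exit 0'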
	I1205 20:03:34.398327   26786 main.go:141] libmachine: (multinode-558947) KVM machine creation complete!
	I1205 20:03:34.398643   26786 main.go:141] libmachine: (multinode-558947) Calling .GetConfigRaw
	I1205 20:03:34.399190   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:34.399401   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:34.399590   26786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:03:34.399604   26786 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:03:34.400740   26786 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:03:34.400757   26786 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:03:34.400763   26786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:03:34.400769   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:34.403157   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.403505   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.403534   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.403668   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:34.403856   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.404041   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.404181   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:34.404340   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:34.404773   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:34.404795   26786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:03:34.533673   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:03:34.533701   26786 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:03:34.533713   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:34.536182   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.536432   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.536456   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.536688   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:34.536899   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.537046   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.537183   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:34.537360   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:34.537685   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:34.537698   26786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:03:34.667138   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 20:03:34.667235   26786 main.go:141] libmachine: found compatible host: buildroot
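	(Note: the provisioner is detected from the /etc/os-release output above. Assuming the same key-based access as the probe earlier, the same check can be reproduced by hand:)
	$ ssh -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa \
	    docker@192.168.39.3 '. /etc/os-release && echo "$ID $VERSION_ID"'
	buildroot 2021.02.12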
	I1205 20:03:34.667246   26786 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:03:34.667254   26786 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:03:34.667546   26786 buildroot.go:166] provisioning hostname "multinode-558947"
	I1205 20:03:34.667573   26786 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:03:34.667807   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:34.670335   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.670662   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.670705   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.670785   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:34.670958   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.671119   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.671265   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:34.671410   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:34.671721   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:34.671733   26786 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947 && echo "multinode-558947" | sudo tee /etc/hostname
	I1205 20:03:34.816457   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-558947
	
	I1205 20:03:34.816496   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:34.819400   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.819713   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.819742   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.819959   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:34.820174   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.820302   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:34.820453   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:34.820592   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:34.820938   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:34.820955   26786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-558947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-558947/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-558947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:03:34.959080   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
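	(Note: after the hostname command and the /etc/hosts patch above, the expected guest state would be roughly the following; shown as a hedged manual check, not actual captured output.)
	$ ssh -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa \
	    docker@192.168.39.3 'hostname && grep "^127.0.1.1" /etc/hosts'
	multinode-558947
	127.0.1.1 multinode-558947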
	I1205 20:03:34.959136   26786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:03:34.959189   26786 buildroot.go:174] setting up certificates
	I1205 20:03:34.959200   26786 provision.go:83] configureAuth start
	I1205 20:03:34.959210   26786 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:03:34.959480   26786 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:03:34.962040   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.962527   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.962550   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.962708   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:34.964915   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.965286   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:34.965315   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:34.965420   26786 provision.go:138] copyHostCerts
	I1205 20:03:34.965446   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:03:34.965481   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:03:34.965508   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:03:34.965563   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:03:34.965655   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:03:34.965674   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:03:34.965682   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:03:34.965702   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:03:34.965756   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:03:34.965771   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:03:34.965778   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:03:34.965794   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:03:34.965848   26786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.multinode-558947 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-558947]
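	(Note: the server certificate generated above should carry the SANs listed in the san=[...] field. A hedged way to confirm them on the host with openssl would be the following, which should print the localhost, minikube, multinode-558947, 192.168.39.3 and 127.0.0.1 entries:)
	$ openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'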
	I1205 20:03:35.125025   26786 provision.go:172] copyRemoteCerts
	I1205 20:03:35.125089   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:03:35.125110   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.127471   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.127721   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.127743   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.127943   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.128121   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.128255   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.128438   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:03:35.223488   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:03:35.223564   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:03:35.248129   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:03:35.248206   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:03:35.272319   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:03:35.272397   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:03:35.295747   26786 provision.go:86] duration metric: configureAuth took 336.534397ms
	I1205 20:03:35.295774   26786 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:03:35.295975   26786 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:03:35.296064   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.298263   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.298566   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.298594   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.298771   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.299039   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.299207   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.299378   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.299543   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:35.299869   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:35.299884   26786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:03:35.636423   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:03:35.636451   26786 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:03:35.636463   26786 main.go:141] libmachine: (multinode-558947) Calling .GetURL
	I1205 20:03:35.637734   26786 main.go:141] libmachine: (multinode-558947) DBG | Using libvirt version 6000000
	I1205 20:03:35.639825   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.640132   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.640161   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.640310   26786 main.go:141] libmachine: Docker is up and running!
	I1205 20:03:35.640329   26786 main.go:141] libmachine: Reticulating splines...
	I1205 20:03:35.640349   26786 client.go:171] LocalClient.Create took 24.925730253s
	I1205 20:03:35.640377   26786 start.go:167] duration metric: libmachine.API.Create for "multinode-558947" took 24.925828722s
	I1205 20:03:35.640390   26786 start.go:300] post-start starting for "multinode-558947" (driver="kvm2")
	I1205 20:03:35.640404   26786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:03:35.640426   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:35.640651   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:03:35.640673   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.642946   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.643306   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.643335   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.643459   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.643627   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.643771   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.643937   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:03:35.741137   26786 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:03:35.745603   26786 command_runner.go:130] > NAME=Buildroot
	I1205 20:03:35.745624   26786 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1205 20:03:35.745631   26786 command_runner.go:130] > ID=buildroot
	I1205 20:03:35.745639   26786 command_runner.go:130] > VERSION_ID=2021.02.12
	I1205 20:03:35.745647   26786 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1205 20:03:35.745702   26786 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:03:35.745726   26786 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:03:35.745796   26786 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:03:35.745912   26786 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:03:35.745927   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 20:03:35.746035   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:03:35.755463   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:03:35.778384   26786 start.go:303] post-start completed in 137.978826ms
	I1205 20:03:35.778427   26786 main.go:141] libmachine: (multinode-558947) Calling .GetConfigRaw
	I1205 20:03:35.778993   26786 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:03:35.781232   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.781553   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.781579   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.781832   26786 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:03:35.782025   26786 start.go:128] duration metric: createHost completed in 25.084808519s
	I1205 20:03:35.782049   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.784506   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.784825   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.784854   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.784987   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.785135   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.785239   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.785354   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.785530   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:03:35.785972   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:03:35.785986   26786 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:03:35.919050   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701806615.901903777
	
	I1205 20:03:35.919076   26786 fix.go:206] guest clock: 1701806615.901903777
	I1205 20:03:35.919086   26786 fix.go:219] Guest: 2023-12-05 20:03:35.901903777 +0000 UTC Remote: 2023-12-05 20:03:35.782039144 +0000 UTC m=+25.202957049 (delta=119.864633ms)
	I1205 20:03:35.919112   26786 fix.go:190] guest clock delta is within tolerance: 119.864633ms
	I1205 20:03:35.919119   26786 start.go:83] releasing machines lock for "multinode-558947", held for 25.221968018s
	I1205 20:03:35.919154   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:35.919457   26786 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:03:35.921937   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.922328   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.922380   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.922529   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:35.923047   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:35.923224   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:03:35.923303   26786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:03:35.923342   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.923487   26786 ssh_runner.go:195] Run: cat /version.json
	I1205 20:03:35.923515   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:03:35.926355   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.926451   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.926744   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.926781   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:35.926807   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.926825   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:35.926914   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.927012   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:03:35.927099   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.927170   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:03:35.927250   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.927307   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:03:35.927419   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:03:35.927451   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:03:36.020413   26786 command_runner.go:130] > {"iso_version": "v1.32.1-1701387192-17703", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1205 20:03:36.043449   26786 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:03:36.044469   26786 ssh_runner.go:195] Run: systemctl --version
	I1205 20:03:36.050110   26786 command_runner.go:130] > systemd 247 (247)
	I1205 20:03:36.050136   26786 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1205 20:03:36.050613   26786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:03:36.217756   26786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:03:36.223841   26786 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:03:36.224114   26786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:03:36.224185   26786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:03:36.239800   26786 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1205 20:03:36.240144   26786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:03:36.240165   26786 start.go:475] detecting cgroup driver to use...
	I1205 20:03:36.240230   26786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:03:36.257434   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:03:36.271370   26786 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:03:36.271428   26786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:03:36.285151   26786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:03:36.298337   26786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:03:36.311875   26786 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1205 20:03:36.398469   26786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:03:36.412795   26786 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:03:36.511292   26786 docker.go:219] disabling docker service ...
	I1205 20:03:36.511364   26786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:03:36.526330   26786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:03:36.538219   26786 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1205 20:03:36.538612   26786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:03:36.639640   26786 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:03:36.639763   26786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:03:36.653702   26786 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1205 20:03:36.654209   26786 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:03:36.745397   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:03:36.759647   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:03:36.778309   26786 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:03:36.778359   26786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:03:36.778400   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:36.788760   26786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:03:36.788828   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:36.799151   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:36.811200   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:03:36.823256   26786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:03:36.835467   26786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:03:36.846149   26786 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:03:36.846185   26786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:03:36.846239   26786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:03:36.861971   26786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:03:36.871909   26786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:03:36.969554   26786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:03:37.149207   26786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:03:37.149269   26786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:03:37.154310   26786 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:03:37.154323   26786 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:03:37.154329   26786 command_runner.go:130] > Device: 16h/22d	Inode: 778         Links: 1
	I1205 20:03:37.154336   26786 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:03:37.154340   26786 command_runner.go:130] > Access: 2023-12-05 20:03:37.114805608 +0000
	I1205 20:03:37.154346   26786 command_runner.go:130] > Modify: 2023-12-05 20:03:37.114805608 +0000
	I1205 20:03:37.154352   26786 command_runner.go:130] > Change: 2023-12-05 20:03:37.114805608 +0000
	I1205 20:03:37.154357   26786 command_runner.go:130] >  Birth: -
	I1205 20:03:37.154552   26786 start.go:543] Will wait 60s for crictl version
	I1205 20:03:37.154626   26786 ssh_runner.go:195] Run: which crictl
	I1205 20:03:37.158436   26786 command_runner.go:130] > /usr/bin/crictl
	I1205 20:03:37.158729   26786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:03:37.194474   26786 command_runner.go:130] > Version:  0.1.0
	I1205 20:03:37.194498   26786 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:03:37.194507   26786 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1205 20:03:37.194514   26786 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:03:37.194607   26786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:03:37.194686   26786 ssh_runner.go:195] Run: crio --version
	I1205 20:03:37.238673   26786 command_runner.go:130] > crio version 1.24.1
	I1205 20:03:37.238704   26786 command_runner.go:130] > Version:          1.24.1
	I1205 20:03:37.238716   26786 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:03:37.238724   26786 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:03:37.238734   26786 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:03:37.238742   26786 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:03:37.238749   26786 command_runner.go:130] > Compiler:         gc
	I1205 20:03:37.238757   26786 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:03:37.238776   26786 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:03:37.238793   26786 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:03:37.238800   26786 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:03:37.238808   26786 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:03:37.240160   26786 ssh_runner.go:195] Run: crio --version
	I1205 20:03:37.290671   26786 command_runner.go:130] > crio version 1.24.1
	I1205 20:03:37.290695   26786 command_runner.go:130] > Version:          1.24.1
	I1205 20:03:37.290706   26786 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:03:37.290713   26786 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:03:37.290724   26786 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:03:37.290732   26786 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:03:37.290741   26786 command_runner.go:130] > Compiler:         gc
	I1205 20:03:37.290748   26786 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:03:37.290767   26786 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:03:37.290782   26786 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:03:37.290790   26786 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:03:37.290798   26786 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:03:37.292722   26786 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:03:37.293959   26786 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:03:37.296671   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:37.296990   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:03:37.297011   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:03:37.297230   26786 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:03:37.301535   26786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:03:37.313478   26786 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:03:37.313535   26786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:03:37.349498   26786 command_runner.go:130] > {
	I1205 20:03:37.349516   26786 command_runner.go:130] >   "images": [
	I1205 20:03:37.349520   26786 command_runner.go:130] >   ]
	I1205 20:03:37.349524   26786 command_runner.go:130] > }
	I1205 20:03:37.349713   26786 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:03:37.349777   26786 ssh_runner.go:195] Run: which lz4
	I1205 20:03:37.353633   26786 command_runner.go:130] > /usr/bin/lz4
	I1205 20:03:37.353660   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:03:37.353740   26786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:03:37.357937   26786 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:03:37.358113   26786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:03:37.358140   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:03:39.147378   26786 crio.go:444] Took 1.793664 seconds to copy over tarball
	I1205 20:03:39.147442   26786 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:03:42.172174   26786 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.024705277s)
	I1205 20:03:42.172209   26786 crio.go:451] Took 3.024808 seconds to extract the tarball
	I1205 20:03:42.172217   26786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:03:42.217636   26786 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:03:42.296833   26786 command_runner.go:130] > {
	I1205 20:03:42.296867   26786 command_runner.go:130] >   "images": [
	I1205 20:03:42.296874   26786 command_runner.go:130] >     {
	I1205 20:03:42.296881   26786 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1205 20:03:42.296886   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.296892   26786 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 20:03:42.296896   26786 command_runner.go:130] >       ],
	I1205 20:03:42.296900   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.296910   26786 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 20:03:42.296920   26786 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1205 20:03:42.296925   26786 command_runner.go:130] >       ],
	I1205 20:03:42.296932   26786 command_runner.go:130] >       "size": "65258016",
	I1205 20:03:42.296940   26786 command_runner.go:130] >       "uid": null,
	I1205 20:03:42.296945   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.296953   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.296957   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.296961   26786 command_runner.go:130] >     },
	I1205 20:03:42.296964   26786 command_runner.go:130] >     {
	I1205 20:03:42.296970   26786 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:03:42.296975   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.296980   26786 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:03:42.296984   26786 command_runner.go:130] >       ],
	I1205 20:03:42.296988   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.296996   26786 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:03:42.297004   26786 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:03:42.297008   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297017   26786 command_runner.go:130] >       "size": "31470524",
	I1205 20:03:42.297021   26786 command_runner.go:130] >       "uid": null,
	I1205 20:03:42.297025   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297029   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297034   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297037   26786 command_runner.go:130] >     },
	I1205 20:03:42.297043   26786 command_runner.go:130] >     {
	I1205 20:03:42.297049   26786 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1205 20:03:42.297053   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297061   26786 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 20:03:42.297064   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297069   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297078   26786 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1205 20:03:42.297093   26786 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1205 20:03:42.297103   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297109   26786 command_runner.go:130] >       "size": "53621675",
	I1205 20:03:42.297116   26786 command_runner.go:130] >       "uid": null,
	I1205 20:03:42.297124   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297130   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297135   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297139   26786 command_runner.go:130] >     },
	I1205 20:03:42.297143   26786 command_runner.go:130] >     {
	I1205 20:03:42.297150   26786 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1205 20:03:42.297154   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297159   26786 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 20:03:42.297163   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297167   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297177   26786 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1205 20:03:42.297184   26786 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1205 20:03:42.297194   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297198   26786 command_runner.go:130] >       "size": "295456551",
	I1205 20:03:42.297202   26786 command_runner.go:130] >       "uid": {
	I1205 20:03:42.297207   26786 command_runner.go:130] >         "value": "0"
	I1205 20:03:42.297211   26786 command_runner.go:130] >       },
	I1205 20:03:42.297218   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297221   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297228   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297232   26786 command_runner.go:130] >     },
	I1205 20:03:42.297235   26786 command_runner.go:130] >     {
	I1205 20:03:42.297245   26786 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1205 20:03:42.297249   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297255   26786 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 20:03:42.297266   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297270   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297280   26786 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1205 20:03:42.297287   26786 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1205 20:03:42.297293   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297298   26786 command_runner.go:130] >       "size": "127226832",
	I1205 20:03:42.297306   26786 command_runner.go:130] >       "uid": {
	I1205 20:03:42.297310   26786 command_runner.go:130] >         "value": "0"
	I1205 20:03:42.297316   26786 command_runner.go:130] >       },
	I1205 20:03:42.297320   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297327   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297331   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297334   26786 command_runner.go:130] >     },
	I1205 20:03:42.297338   26786 command_runner.go:130] >     {
	I1205 20:03:42.297344   26786 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1205 20:03:42.297350   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297355   26786 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 20:03:42.297359   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297363   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297373   26786 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 20:03:42.297381   26786 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1205 20:03:42.297387   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297391   26786 command_runner.go:130] >       "size": "123261750",
	I1205 20:03:42.297398   26786 command_runner.go:130] >       "uid": {
	I1205 20:03:42.297402   26786 command_runner.go:130] >         "value": "0"
	I1205 20:03:42.297405   26786 command_runner.go:130] >       },
	I1205 20:03:42.297410   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297416   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297420   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297426   26786 command_runner.go:130] >     },
	I1205 20:03:42.297429   26786 command_runner.go:130] >     {
	I1205 20:03:42.297435   26786 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1205 20:03:42.297441   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297446   26786 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 20:03:42.297452   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297456   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297463   26786 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1205 20:03:42.297483   26786 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 20:03:42.297489   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297494   26786 command_runner.go:130] >       "size": "74749335",
	I1205 20:03:42.297500   26786 command_runner.go:130] >       "uid": null,
	I1205 20:03:42.297504   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297512   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297516   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297522   26786 command_runner.go:130] >     },
	I1205 20:03:42.297525   26786 command_runner.go:130] >     {
	I1205 20:03:42.297531   26786 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1205 20:03:42.297538   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297543   26786 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 20:03:42.297549   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297554   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297572   26786 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 20:03:42.297582   26786 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1205 20:03:42.297590   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297599   26786 command_runner.go:130] >       "size": "61551410",
	I1205 20:03:42.297603   26786 command_runner.go:130] >       "uid": {
	I1205 20:03:42.297610   26786 command_runner.go:130] >         "value": "0"
	I1205 20:03:42.297616   26786 command_runner.go:130] >       },
	I1205 20:03:42.297621   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297629   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297636   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297639   26786 command_runner.go:130] >     },
	I1205 20:03:42.297645   26786 command_runner.go:130] >     {
	I1205 20:03:42.297652   26786 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1205 20:03:42.297658   26786 command_runner.go:130] >       "repoTags": [
	I1205 20:03:42.297663   26786 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 20:03:42.297669   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297674   26786 command_runner.go:130] >       "repoDigests": [
	I1205 20:03:42.297683   26786 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1205 20:03:42.297696   26786 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1205 20:03:42.297699   26786 command_runner.go:130] >       ],
	I1205 20:03:42.297706   26786 command_runner.go:130] >       "size": "750414",
	I1205 20:03:42.297710   26786 command_runner.go:130] >       "uid": {
	I1205 20:03:42.297717   26786 command_runner.go:130] >         "value": "65535"
	I1205 20:03:42.297720   26786 command_runner.go:130] >       },
	I1205 20:03:42.297728   26786 command_runner.go:130] >       "username": "",
	I1205 20:03:42.297732   26786 command_runner.go:130] >       "spec": null,
	I1205 20:03:42.297739   26786 command_runner.go:130] >       "pinned": false
	I1205 20:03:42.297743   26786 command_runner.go:130] >     }
	I1205 20:03:42.297749   26786 command_runner.go:130] >   ]
	I1205 20:03:42.297753   26786 command_runner.go:130] > }
	I1205 20:03:42.297857   26786 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:03:42.297868   26786 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:03:42.297931   26786 ssh_runner.go:195] Run: crio config
	I1205 20:03:42.350329   26786 command_runner.go:130] ! time="2023-12-05 20:03:42.341678272Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1205 20:03:42.350358   26786 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:03:42.361661   26786 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:03:42.361698   26786 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:03:42.361711   26786 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:03:42.361715   26786 command_runner.go:130] > #
	I1205 20:03:42.361724   26786 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:03:42.361731   26786 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:03:42.361737   26786 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:03:42.361744   26786 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:03:42.361750   26786 command_runner.go:130] > # reload'.
	I1205 20:03:42.361760   26786 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:03:42.361769   26786 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:03:42.361784   26786 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:03:42.361795   26786 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:03:42.361801   26786 command_runner.go:130] > [crio]
	I1205 20:03:42.361812   26786 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:03:42.361822   26786 command_runner.go:130] > # containers images, in this directory.
	I1205 20:03:42.361828   26786 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:03:42.361839   26786 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:03:42.361851   26786 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:03:42.361862   26786 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:03:42.361876   26786 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:03:42.361884   26786 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:03:42.361900   26786 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:03:42.361910   26786 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:03:42.361921   26786 command_runner.go:130] > storage_option = [
	I1205 20:03:42.361929   26786 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:03:42.361935   26786 command_runner.go:130] > ]
	I1205 20:03:42.361942   26786 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:03:42.361949   26786 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:03:42.361958   26786 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:03:42.361968   26786 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:03:42.361982   26786 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:03:42.361990   26786 command_runner.go:130] > # always happen on a node reboot
	I1205 20:03:42.361998   26786 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:03:42.362011   26786 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:03:42.362021   26786 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:03:42.362040   26786 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:03:42.362052   26786 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:03:42.362061   26786 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:03:42.362075   26786 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:03:42.362088   26786 command_runner.go:130] > # internal_wipe = true
	I1205 20:03:42.362098   26786 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:03:42.362111   26786 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:03:42.362121   26786 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:03:42.362216   26786 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:03:42.362232   26786 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:03:42.362242   26786 command_runner.go:130] > [crio.api]
	I1205 20:03:42.362252   26786 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:03:42.362263   26786 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:03:42.362285   26786 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:03:42.362297   26786 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:03:42.362308   26786 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:03:42.362320   26786 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:03:42.362328   26786 command_runner.go:130] > # stream_port = "0"
	I1205 20:03:42.362339   26786 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:03:42.362354   26786 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:03:42.362364   26786 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:03:42.362371   26786 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:03:42.362389   26786 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:03:42.362399   26786 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:03:42.362405   26786 command_runner.go:130] > # minutes.
	I1205 20:03:42.362413   26786 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:03:42.362427   26786 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:03:42.362440   26786 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:03:42.362453   26786 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:03:42.362466   26786 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:03:42.362477   26786 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:03:42.362489   26786 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:03:42.362500   26786 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:03:42.362513   26786 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:03:42.362523   26786 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:03:42.362538   26786 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:03:42.362548   26786 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 20:03:42.362640   26786 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:03:42.362655   26786 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:03:42.362659   26786 command_runner.go:130] > [crio.runtime]
	I1205 20:03:42.362674   26786 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:03:42.362687   26786 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:03:42.362697   26786 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:03:42.362710   26786 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:03:42.362719   26786 command_runner.go:130] > # default_ulimits = [
	I1205 20:03:42.362728   26786 command_runner.go:130] > # ]
	I1205 20:03:42.362740   26786 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:03:42.362747   26786 command_runner.go:130] > # no_pivot = false
	I1205 20:03:42.362756   26786 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:03:42.362771   26786 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:03:42.362783   26786 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:03:42.362795   26786 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:03:42.362806   26786 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:03:42.362820   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:03:42.362829   26786 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:03:42.362837   26786 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:03:42.362847   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:03:42.362858   26786 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:03:42.362872   26786 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:03:42.362886   26786 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:03:42.362900   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:03:42.362910   26786 command_runner.go:130] > conmon_env = [
	I1205 20:03:42.362922   26786 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:03:42.362929   26786 command_runner.go:130] > ]
	I1205 20:03:42.362935   26786 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:03:42.362946   26786 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:03:42.362959   26786 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:03:42.362966   26786 command_runner.go:130] > # default_env = [
	I1205 20:03:42.362975   26786 command_runner.go:130] > # ]
	I1205 20:03:42.362985   26786 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:03:42.362996   26786 command_runner.go:130] > # selinux = false
	I1205 20:03:42.363008   26786 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:03:42.363021   26786 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:03:42.363030   26786 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:03:42.363038   26786 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:03:42.363047   26786 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:03:42.363065   26786 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:03:42.363080   26786 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:03:42.363087   26786 command_runner.go:130] > # which might increase security.
	I1205 20:03:42.363096   26786 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:03:42.363109   26786 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:03:42.363122   26786 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:03:42.363132   26786 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:03:42.363144   26786 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:03:42.363157   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:42.363174   26786 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:03:42.363186   26786 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:03:42.363200   26786 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:03:42.363210   26786 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:03:42.363223   26786 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:03:42.363230   26786 command_runner.go:130] > # irqbalance daemon.
	I1205 20:03:42.363239   26786 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:03:42.363253   26786 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:03:42.363265   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:42.363278   26786 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:03:42.363290   26786 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:03:42.363301   26786 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:03:42.363312   26786 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:03:42.363319   26786 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:03:42.363333   26786 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:03:42.363347   26786 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:03:42.363357   26786 command_runner.go:130] > # will be added.
	I1205 20:03:42.363367   26786 command_runner.go:130] > # default_capabilities = [
	I1205 20:03:42.363376   26786 command_runner.go:130] > # 	"CHOWN",
	I1205 20:03:42.363386   26786 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:03:42.363396   26786 command_runner.go:130] > # 	"FSETID",
	I1205 20:03:42.363406   26786 command_runner.go:130] > # 	"FOWNER",
	I1205 20:03:42.363413   26786 command_runner.go:130] > # 	"SETGID",
	I1205 20:03:42.363417   26786 command_runner.go:130] > # 	"SETUID",
	I1205 20:03:42.363426   26786 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:03:42.363437   26786 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:03:42.363447   26786 command_runner.go:130] > # 	"KILL",
	I1205 20:03:42.363460   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363473   26786 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:03:42.363486   26786 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:03:42.363496   26786 command_runner.go:130] > # default_sysctls = [
	I1205 20:03:42.363502   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363507   26786 command_runner.go:130] > # List of devices on the host that a
	I1205 20:03:42.363521   26786 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:03:42.363532   26786 command_runner.go:130] > # allowed_devices = [
	I1205 20:03:42.363539   26786 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:03:42.363548   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363559   26786 command_runner.go:130] > # List of additional devices, specified as
	I1205 20:03:42.363574   26786 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:03:42.363586   26786 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:03:42.363630   26786 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:03:42.363641   26786 command_runner.go:130] > # additional_devices = [
	I1205 20:03:42.363647   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363659   26786 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:03:42.363669   26786 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:03:42.363682   26786 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:03:42.363692   26786 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:03:42.363701   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363711   26786 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:03:42.363723   26786 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:03:42.363733   26786 command_runner.go:130] > # Defaults to false.
	I1205 20:03:42.363745   26786 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:03:42.363759   26786 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:03:42.363772   26786 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:03:42.363782   26786 command_runner.go:130] > # hooks_dir = [
	I1205 20:03:42.363793   26786 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:03:42.363803   26786 command_runner.go:130] > # ]
	I1205 20:03:42.363829   26786 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I1205 20:03:42.363844   26786 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:03:42.363853   26786 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:03:42.363862   26786 command_runner.go:130] > #
	I1205 20:03:42.363872   26786 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:03:42.363886   26786 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:03:42.363902   26786 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:03:42.363910   26786 command_runner.go:130] > #
	I1205 20:03:42.363917   26786 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:03:42.363930   26786 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:03:42.363944   26786 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:03:42.363958   26786 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:03:42.363967   26786 command_runner.go:130] > #
	I1205 20:03:42.363981   26786 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:03:42.363993   26786 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:03:42.364006   26786 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:03:42.364014   26786 command_runner.go:130] > pids_limit = 1024
	I1205 20:03:42.364023   26786 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:03:42.364041   26786 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:03:42.364052   26786 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:03:42.364065   26786 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:03:42.364072   26786 command_runner.go:130] > # log_size_max = -1
	I1205 20:03:42.364084   26786 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:03:42.364092   26786 command_runner.go:130] > # log_to_journald = false
	I1205 20:03:42.364109   26786 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:03:42.364121   26786 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:03:42.364133   26786 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:03:42.364142   26786 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:03:42.364152   26786 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:03:42.364169   26786 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:03:42.364181   26786 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:03:42.364188   26786 command_runner.go:130] > # read_only = false
	I1205 20:03:42.364198   26786 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:03:42.364212   26786 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:03:42.364222   26786 command_runner.go:130] > # live configuration reload.
	I1205 20:03:42.364229   26786 command_runner.go:130] > # log_level = "info"
	I1205 20:03:42.364240   26786 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:03:42.364252   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:42.364263   26786 command_runner.go:130] > # log_filter = ""
	I1205 20:03:42.364276   26786 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:03:42.364290   26786 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:03:42.364300   26786 command_runner.go:130] > # separated by comma.
	I1205 20:03:42.364313   26786 command_runner.go:130] > # uid_mappings = ""
	I1205 20:03:42.364325   26786 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:03:42.364339   26786 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:03:42.364349   26786 command_runner.go:130] > # separated by comma.
	I1205 20:03:42.364357   26786 command_runner.go:130] > # gid_mappings = ""
	I1205 20:03:42.364370   26786 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:03:42.364383   26786 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:03:42.364396   26786 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:03:42.364407   26786 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:03:42.364418   26786 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:03:42.364429   26786 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:03:42.364443   26786 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:03:42.364453   26786 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:03:42.364464   26786 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:03:42.364477   26786 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:03:42.364490   26786 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:03:42.364500   26786 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:03:42.364513   26786 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:03:42.364526   26786 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:03:42.364536   26786 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:03:42.364548   26786 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:03:42.364562   26786 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:03:42.364575   26786 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:03:42.364588   26786 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:03:42.364603   26786 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:03:42.364611   26786 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:03:42.364618   26786 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:03:42.364629   26786 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:03:42.364640   26786 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:03:42.364652   26786 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:03:42.364663   26786 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:03:42.364680   26786 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:03:42.364694   26786 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:03:42.364706   26786 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:03:42.364714   26786 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:03:42.364722   26786 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:03:42.364743   26786 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:03:42.364761   26786 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:03:42.364773   26786 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:03:42.364789   26786 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:03:42.364800   26786 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:03:42.364807   26786 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:03:42.364811   26786 command_runner.go:130] > # ]
	I1205 20:03:42.364825   26786 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:03:42.364840   26786 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:03:42.364853   26786 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:03:42.364866   26786 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:03:42.364875   26786 command_runner.go:130] > #
	I1205 20:03:42.364883   26786 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:03:42.364891   26786 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:03:42.364896   26786 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:03:42.364906   26786 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:03:42.364919   26786 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:03:42.364926   26786 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:03:42.364939   26786 command_runner.go:130] > # Where:
	I1205 20:03:42.364952   26786 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:03:42.364965   26786 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:03:42.364978   26786 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:03:42.364989   26786 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:03:42.364996   26786 command_runner.go:130] > #   in $PATH.
	I1205 20:03:42.365006   26786 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:03:42.365018   26786 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:03:42.365029   26786 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:03:42.365038   26786 command_runner.go:130] > #   state.
	I1205 20:03:42.365052   26786 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:03:42.365064   26786 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:03:42.365077   26786 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:03:42.365087   26786 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:03:42.365097   26786 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:03:42.365112   26786 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:03:42.365124   26786 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:03:42.365138   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:03:42.365157   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:03:42.365174   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:03:42.365187   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:03:42.365197   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:03:42.365211   26786 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:03:42.365225   26786 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:03:42.365239   26786 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:03:42.365256   26786 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:03:42.365267   26786 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:03:42.365278   26786 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:03:42.365287   26786 command_runner.go:130] > runtime_type = "oci"
	I1205 20:03:42.365295   26786 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:03:42.365302   26786 command_runner.go:130] > runtime_config_path = ""
	I1205 20:03:42.365312   26786 command_runner.go:130] > monitor_path = ""
	I1205 20:03:42.365323   26786 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:03:42.365330   26786 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:03:42.365344   26786 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:03:42.365354   26786 command_runner.go:130] > # running containers
	I1205 20:03:42.365368   26786 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:03:42.365382   26786 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:03:42.365473   26786 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:03:42.365487   26786 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:03:42.365492   26786 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:03:42.365498   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:03:42.365509   26786 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:03:42.365521   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:03:42.365537   26786 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:03:42.365548   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:03:42.365562   26786 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:03:42.365572   26786 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:03:42.365583   26786 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:03:42.365599   26786 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:03:42.365615   26786 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:03:42.365628   26786 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:03:42.365646   26786 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:03:42.365661   26786 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:03:42.365678   26786 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:03:42.365690   26786 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:03:42.365700   26786 command_runner.go:130] > # Example:
	I1205 20:03:42.365711   26786 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:03:42.365719   26786 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:03:42.365731   26786 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:03:42.365742   26786 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:03:42.365752   26786 command_runner.go:130] > # cpuset = 0
	I1205 20:03:42.365762   26786 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:03:42.365771   26786 command_runner.go:130] > # Where:
	I1205 20:03:42.365780   26786 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:03:42.365792   26786 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:03:42.365805   26786 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:03:42.365817   26786 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:03:42.365833   26786 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:03:42.365846   26786 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:03:42.365854   26786 command_runner.go:130] > # 
	I1205 20:03:42.365864   26786 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:03:42.365873   26786 command_runner.go:130] > #
	I1205 20:03:42.365883   26786 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:03:42.365897   26786 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:03:42.365908   26786 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:03:42.365921   26786 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:03:42.365934   26786 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:03:42.365943   26786 command_runner.go:130] > [crio.image]
	I1205 20:03:42.365953   26786 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:03:42.365963   26786 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:03:42.365971   26786 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:03:42.365984   26786 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:03:42.365995   26786 command_runner.go:130] > # global_auth_file = ""
	I1205 20:03:42.366009   26786 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:03:42.366022   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:42.366033   26786 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:03:42.366047   26786 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:03:42.366057   26786 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:03:42.366063   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:03:42.366071   26786 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:03:42.366081   26786 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:03:42.366091   26786 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 20:03:42.366102   26786 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 20:03:42.366111   26786 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:03:42.366118   26786 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:03:42.366128   26786 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:03:42.366139   26786 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:03:42.366148   26786 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:03:42.366154   26786 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:03:42.366167   26786 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:03:42.366175   26786 command_runner.go:130] > # signature_policy = ""
	I1205 20:03:42.366185   26786 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:03:42.366195   26786 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:03:42.366202   26786 command_runner.go:130] > # changing them here.
	I1205 20:03:42.366212   26786 command_runner.go:130] > # insecure_registries = [
	I1205 20:03:42.366218   26786 command_runner.go:130] > # ]
	I1205 20:03:42.366232   26786 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:03:42.366244   26786 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:03:42.366253   26786 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:03:42.366265   26786 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:03:42.366291   26786 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:03:42.366305   26786 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:03:42.366315   26786 command_runner.go:130] > # CNI plugins.
	I1205 20:03:42.366324   26786 command_runner.go:130] > [crio.network]
	I1205 20:03:42.366337   26786 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:03:42.366349   26786 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:03:42.366364   26786 command_runner.go:130] > # cni_default_network = ""
	I1205 20:03:42.366377   26786 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:03:42.366390   26786 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:03:42.366402   26786 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:03:42.366412   26786 command_runner.go:130] > # plugin_dirs = [
	I1205 20:03:42.366421   26786 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:03:42.366425   26786 command_runner.go:130] > # ]
	I1205 20:03:42.366433   26786 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:03:42.366444   26786 command_runner.go:130] > [crio.metrics]
	I1205 20:03:42.366461   26786 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:03:42.366472   26786 command_runner.go:130] > enable_metrics = true
	I1205 20:03:42.366483   26786 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:03:42.366494   26786 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:03:42.366506   26786 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:03:42.366515   26786 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:03:42.366528   26786 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:03:42.366539   26786 command_runner.go:130] > # metrics_collectors = [
	I1205 20:03:42.366546   26786 command_runner.go:130] > # 	"operations",
	I1205 20:03:42.366558   26786 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:03:42.366569   26786 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:03:42.366579   26786 command_runner.go:130] > # 	"operations_errors",
	I1205 20:03:42.366590   26786 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:03:42.366600   26786 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:03:42.366609   26786 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:03:42.366618   26786 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:03:42.366629   26786 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:03:42.366640   26786 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:03:42.366651   26786 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:03:42.366661   26786 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:03:42.366671   26786 command_runner.go:130] > # 	"containers_oom",
	I1205 20:03:42.366682   26786 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:03:42.366692   26786 command_runner.go:130] > # 	"operations_total",
	I1205 20:03:42.366702   26786 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:03:42.366711   26786 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:03:42.366719   26786 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:03:42.366730   26786 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:03:42.366741   26786 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:03:42.366753   26786 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:03:42.366764   26786 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:03:42.366774   26786 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:03:42.366785   26786 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:03:42.366794   26786 command_runner.go:130] > # ]
	I1205 20:03:42.366806   26786 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:03:42.366813   26786 command_runner.go:130] > # metrics_port = 9090
	I1205 20:03:42.366820   26786 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:03:42.366835   26786 command_runner.go:130] > # metrics_socket = ""
	I1205 20:03:42.366848   26786 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:03:42.366858   26786 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:03:42.366872   26786 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:03:42.366883   26786 command_runner.go:130] > # certificate on any modification event.
	I1205 20:03:42.366893   26786 command_runner.go:130] > # metrics_cert = ""
	I1205 20:03:42.366902   26786 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:03:42.366913   26786 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:03:42.366920   26786 command_runner.go:130] > # metrics_key = ""
	I1205 20:03:42.366932   26786 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:03:42.366942   26786 command_runner.go:130] > [crio.tracing]
	I1205 20:03:42.366953   26786 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:03:42.366963   26786 command_runner.go:130] > # enable_tracing = false
	I1205 20:03:42.366975   26786 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 20:03:42.366986   26786 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:03:42.366998   26786 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:03:42.367009   26786 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:03:42.367018   26786 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:03:42.367030   26786 command_runner.go:130] > [crio.stats]
	I1205 20:03:42.367044   26786 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:03:42.367057   26786 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:03:42.367067   26786 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:03:42.367178   26786 cni.go:84] Creating CNI manager for ""
	I1205 20:03:42.367194   26786 cni.go:136] 1 nodes found, recommending kindnet
	I1205 20:03:42.367216   26786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:03:42.367244   26786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-558947 NodeName:multinode-558947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:03:42.367389   26786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-558947"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:03:42.367493   26786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-558947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:03:42.367559   26786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:03:42.377822   26786 command_runner.go:130] > kubeadm
	I1205 20:03:42.377846   26786 command_runner.go:130] > kubectl
	I1205 20:03:42.377853   26786 command_runner.go:130] > kubelet
	I1205 20:03:42.377892   26786 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:03:42.377962   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:03:42.387789   26786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 20:03:42.405782   26786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:03:42.422799   26786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1205 20:03:42.439358   26786 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1205 20:03:42.443484   26786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:03:42.455487   26786 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947 for IP: 192.168.39.3
	I1205 20:03:42.455519   26786 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.455682   26786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:03:42.455757   26786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:03:42.455836   26786 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key
	I1205 20:03:42.455855   26786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt with IP's: []
	I1205 20:03:42.577387   26786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt ...
	I1205 20:03:42.577417   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt: {Name:mk9d4f9ec40a110a9fd7ecd821dbbf3fe366d2d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.577586   26786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key ...
	I1205 20:03:42.577597   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key: {Name:mkeeedd2bb895ae8560bb35b318d78bc247fcbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.577674   26786 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key.599d509e
	I1205 20:03:42.577688   26786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt.599d509e with IP's: [192.168.39.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 20:03:42.771835   26786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt.599d509e ...
	I1205 20:03:42.771866   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt.599d509e: {Name:mk00e0f0bfa601e5b83b0a675b67afc0c51c984f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.772029   26786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key.599d509e ...
	I1205 20:03:42.772043   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key.599d509e: {Name:mke6f1375807f5d2ce9c7fcdf7cf80500211d1d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.772109   26786 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt.599d509e -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt
	I1205 20:03:42.772205   26786 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key.599d509e -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key
	I1205 20:03:42.772259   26786 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key
	I1205 20:03:42.772270   26786 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt with IP's: []
	I1205 20:03:42.874751   26786 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt ...
	I1205 20:03:42.874781   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt: {Name:mkad4a221e74085061665defff8afe63dc48b1b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:03:42.874942   26786 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key ...
	I1205 20:03:42.874956   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key: {Name:mkdf0424297025af0199051c620be9d1713e9c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
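	The crypto.go lines above generate the profile's client, apiserver, and proxy-client certificates by signing fresh keys with the shared minikube CA. A minimal sketch of that pattern using only the Go standard library follows; it is illustrative rather than minikube's actual crypto.go, the ca.crt/ca.key and output paths are placeholders, and it assumes the CA key is an RSA key in PKCS#1 form (error handling elided for brevity).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the existing CA (stand-ins for .minikube/ca.crt and ca.key).
		caCertPEM, _ := os.ReadFile("ca.crt")
		caKeyPEM, _ := os.ReadFile("ca.key")
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

		// Fresh key pair for the new certificate.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			// IP SANs mirroring the apiserver cert generated in the log.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.3"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}

		// Sign the new public key with the CA and write cert and key as PEM.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		_ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		_ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
	}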
	I1205 20:03:42.875029   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:03:42.875048   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:03:42.875058   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:03:42.875070   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:03:42.875079   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:03:42.875089   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:03:42.875099   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:03:42.875109   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:03:42.875173   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:03:42.875207   26786 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:03:42.875224   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:03:42.875248   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:03:42.875269   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:03:42.875293   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:03:42.875334   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:03:42.875359   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 20:03:42.875372   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:42.875383   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 20:03:42.875957   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:03:42.900649   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:03:42.924296   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:03:42.947879   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:03:42.969683   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:03:42.992432   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:03:43.015026   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:03:43.040515   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:03:43.063048   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:03:43.085610   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:03:43.108930   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:03:43.131531   26786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:03:43.148079   26786 ssh_runner.go:195] Run: openssl version
	I1205 20:03:43.154012   26786 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1205 20:03:43.154083   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:03:43.165009   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:03:43.169973   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:03:43.170007   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:03:43.170055   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:03:43.175868   26786 command_runner.go:130] > 3ec20f2e
	I1205 20:03:43.175956   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:03:43.187008   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:03:43.198448   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:43.203398   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:43.203432   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:43.203472   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:03:43.208897   26786 command_runner.go:130] > b5213941
	I1205 20:03:43.209156   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:03:43.220356   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:03:43.231403   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:03:43.236003   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:03:43.236032   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:03:43.236076   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:03:43.241638   26786 command_runner.go:130] > 51391683
	I1205 20:03:43.241765   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
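	The test/ln runs above install each PEM into the OpenSSL trust directory under its subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0). A small illustrative Go equivalent is below; installCA is a hypothetical helper that shells out to the same openssl x509 -hash call and recreates the symlink, and the path in main is simply the example from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA asks openssl for the certificate's subject hash and exposes the
	// PEM as /etc/ssl/certs/<hash>.0, the name OpenSSL-based clients look up.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		// Recreate the symlink idempotently, like the `test -L || ln -fs` above.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("install failed:", err)
		}
	}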
	I1205 20:03:43.252871   26786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:03:43.257430   26786 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:03:43.257660   26786 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:03:43.257714   26786 kubeadm.go:404] StartCluster: {Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:03:43.257797   26786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:03:43.257837   26786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:03:43.305497   26786 cri.go:89] found id: ""
	I1205 20:03:43.305561   26786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:03:43.317273   26786 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1205 20:03:43.317301   26786 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1205 20:03:43.317310   26786 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1205 20:03:43.317622   26786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:03:43.329292   26786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:03:43.339549   26786 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1205 20:03:43.339578   26786 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1205 20:03:43.339590   26786 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1205 20:03:43.339601   26786 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:03:43.339640   26786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:03:43.339680   26786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:03:43.712224   26786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:03:43.712252   26786 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
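The init above is told to ignore preflight failures for paths that minikube manages itself (the /etc/kubernetes/manifests and /var/lib/minikube directories) plus the Port-10250, Swap, NumCPU and Mem checks, and kubeadm immediately warns that the kubelet unit is not enabled. A minimal sketch of re-running just that phase by hand and clearing the warning, assuming the rendered config is still at /var/tmp/minikube/kubeadm.yaml on the guest:

    # re-run only the preflight checks against the same rendered config (sketch)
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
    # the [WARNING Service-Kubelet] message goes away once the unit is enabled
    sudo systemctl enable kubelet.service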
	I1205 20:03:56.183448   26786 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:03:56.183493   26786 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1205 20:03:56.183561   26786 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:03:56.183579   26786 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:03:56.183661   26786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:03:56.183678   26786 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:03:56.183825   26786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:03:56.183836   26786 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:03:56.183951   26786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:03:56.183970   26786 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:03:56.184052   26786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:03:56.185833   26786 out.go:204]   - Generating certificates and keys ...
	I1205 20:03:56.184157   26786 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:03:56.185914   26786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:03:56.185930   26786 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1205 20:03:56.186028   26786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:03:56.186040   26786 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1205 20:03:56.186138   26786 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:03:56.186149   26786 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:03:56.186229   26786 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:03:56.186239   26786 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:03:56.186343   26786 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:03:56.186363   26786 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1205 20:03:56.186445   26786 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 20:03:56.186453   26786 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1205 20:03:56.186515   26786 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 20:03:56.186528   26786 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1205 20:03:56.186645   26786 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-558947] and IPs [192.168.39.3 127.0.0.1 ::1]
	I1205 20:03:56.186660   26786 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-558947] and IPs [192.168.39.3 127.0.0.1 ::1]
	I1205 20:03:56.186791   26786 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 20:03:56.186814   26786 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1205 20:03:56.186965   26786 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-558947] and IPs [192.168.39.3 127.0.0.1 ::1]
	I1205 20:03:56.186977   26786 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-558947] and IPs [192.168.39.3 127.0.0.1 ::1]
	I1205 20:03:56.187061   26786 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:03:56.187080   26786 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:03:56.187141   26786 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:03:56.187148   26786 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:03:56.187183   26786 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 20:03:56.187200   26786 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1205 20:03:56.187292   26786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:03:56.187302   26786 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:03:56.187385   26786 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:03:56.187396   26786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:03:56.187473   26786 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:03:56.187483   26786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:03:56.187568   26786 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:03:56.187584   26786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:03:56.187637   26786 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:03:56.187642   26786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:03:56.187710   26786 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:03:56.187715   26786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:03:56.187767   26786 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:03:56.187772   26786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:03:56.189402   26786 out.go:204]   - Booting up control plane ...
	I1205 20:03:56.189479   26786 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:03:56.189487   26786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:03:56.189553   26786 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:03:56.189560   26786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:03:56.189615   26786 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:03:56.189621   26786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:03:56.189701   26786 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:03:56.189707   26786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:03:56.189780   26786 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:03:56.189786   26786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:03:56.189819   26786 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:03:56.189825   26786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:03:56.189963   26786 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:03:56.189974   26786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:03:56.190058   26786 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503085 seconds
	I1205 20:03:56.190068   26786 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503085 seconds
	I1205 20:03:56.190170   26786 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:03:56.190177   26786 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:03:56.190320   26786 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:03:56.190338   26786 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:03:56.190414   26786 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:03:56.190424   26786 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:03:56.190633   26786 command_runner.go:130] > [mark-control-plane] Marking the node multinode-558947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:03:56.190645   26786 kubeadm.go:322] [mark-control-plane] Marking the node multinode-558947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:03:56.190689   26786 command_runner.go:130] > [bootstrap-token] Using token: lsa16b.gfdpyd4updljzmdh
	I1205 20:03:56.190704   26786 kubeadm.go:322] [bootstrap-token] Using token: lsa16b.gfdpyd4updljzmdh
	I1205 20:03:56.192332   26786 out.go:204]   - Configuring RBAC rules ...
	I1205 20:03:56.192442   26786 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:03:56.192455   26786 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:03:56.192537   26786 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:03:56.192548   26786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:03:56.192725   26786 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:03:56.192753   26786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:03:56.192876   26786 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:03:56.192890   26786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:03:56.193037   26786 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:03:56.193047   26786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:03:56.193167   26786 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:03:56.193175   26786 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:03:56.193304   26786 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:03:56.193313   26786 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:03:56.193388   26786 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1205 20:03:56.193396   26786 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:03:56.193463   26786 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1205 20:03:56.193470   26786 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:03:56.193473   26786 kubeadm.go:322] 
	I1205 20:03:56.193521   26786 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1205 20:03:56.193526   26786 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:03:56.193529   26786 kubeadm.go:322] 
	I1205 20:03:56.193622   26786 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1205 20:03:56.193630   26786 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:03:56.193636   26786 kubeadm.go:322] 
	I1205 20:03:56.193670   26786 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1205 20:03:56.193678   26786 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:03:56.193757   26786 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:03:56.193765   26786 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:03:56.193835   26786 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:03:56.193844   26786 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:03:56.193851   26786 kubeadm.go:322] 
	I1205 20:03:56.193926   26786 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1205 20:03:56.193934   26786 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:03:56.193940   26786 kubeadm.go:322] 
	I1205 20:03:56.194016   26786 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:03:56.194030   26786 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:03:56.194041   26786 kubeadm.go:322] 
	I1205 20:03:56.194095   26786 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1205 20:03:56.194103   26786 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:03:56.194196   26786 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:03:56.194210   26786 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:03:56.194335   26786 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:03:56.194344   26786 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:03:56.194350   26786 kubeadm.go:322] 
	I1205 20:03:56.194458   26786 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:03:56.194469   26786 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:03:56.194571   26786 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1205 20:03:56.194575   26786 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:03:56.194595   26786 kubeadm.go:322] 
	I1205 20:03:56.194689   26786 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token lsa16b.gfdpyd4updljzmdh \
	I1205 20:03:56.194698   26786 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lsa16b.gfdpyd4updljzmdh \
	I1205 20:03:56.194809   26786 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:03:56.194818   26786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:03:56.194845   26786 command_runner.go:130] > 	--control-plane 
	I1205 20:03:56.194854   26786 kubeadm.go:322] 	--control-plane 
	I1205 20:03:56.194861   26786 kubeadm.go:322] 
	I1205 20:03:56.194978   26786 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:03:56.194988   26786 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:03:56.194992   26786 kubeadm.go:322] 
	I1205 20:03:56.195055   26786 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lsa16b.gfdpyd4updljzmdh \
	I1205 20:03:56.195061   26786 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lsa16b.gfdpyd4updljzmdh \
	I1205 20:03:56.195170   26786 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:03:56.195188   26786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
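The join commands above are standard kubeadm init output; the bootstrap token printed with them is short-lived (24 hours by default), so a node added later usually needs a fresh one. A sketch of both paths, reusing the values from this log:

    # on a prospective worker, join with the token and CA-cert hash printed above (sketch)
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token lsa16b.gfdpyd4updljzmdh \
      --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71
    # or, on the control plane, mint a new token and print a complete join command
    sudo kubeadm token create --print-join-command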
	I1205 20:03:56.195198   26786 cni.go:84] Creating CNI manager for ""
	I1205 20:03:56.195208   26786 cni.go:136] 1 nodes found, recommending kindnet
	I1205 20:03:56.197152   26786 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:03:56.198520   26786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:03:56.217080   26786 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:03:56.217102   26786 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1205 20:03:56.217109   26786 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1205 20:03:56.217118   26786 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:03:56.217124   26786 command_runner.go:130] > Access: 2023-12-05 20:03:24.065701117 +0000
	I1205 20:03:56.217129   26786 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1205 20:03:56.217134   26786 command_runner.go:130] > Change: 2023-12-05 20:03:22.189701117 +0000
	I1205 20:03:56.217141   26786 command_runner.go:130] >  Birth: -
	I1205 20:03:56.219081   26786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:03:56.219099   26786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:03:56.304418   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:03:57.250315   26786 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1205 20:03:57.258188   26786 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1205 20:03:57.268634   26786 command_runner.go:130] > serviceaccount/kindnet created
	I1205 20:03:57.311382   26786 command_runner.go:130] > daemonset.apps/kindnet created
	I1205 20:03:57.314375   26786 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.009921895s)
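With only one node present, cni.go picks kindnet and applies its manifest with the cluster's bundled kubectl; the ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet reported created above can be checked directly. A quick verification sketch:

    # the kindnet DaemonSet should roll out one pod per node (sketch)
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
    kubectl -n kube-system get pods -o wide | grep kindnet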
	I1205 20:03:57.314429   26786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:03:57.314527   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:57.314565   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-558947 minikube.k8s.io/updated_at=2023_12_05T20_03_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:57.354069   26786 command_runner.go:130] > -16
	I1205 20:03:57.354138   26786 ops.go:34] apiserver oom_adj: -16
	I1205 20:03:57.510287   26786 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
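The minikube-rbac binding created above grants cluster-admin to the kube-system:default ServiceAccount, which is what the addon pods run as. A hedged one-liner to confirm the binding and its subject:

    # inspect the role and subjects behind the binding (sketch)
    kubectl get clusterrolebinding minikube-rbac -o wide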
	I1205 20:03:57.512856   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:57.524927   26786 command_runner.go:130] > node/multinode-558947 labeled
	I1205 20:03:57.615261   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:03:57.617004   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:57.720878   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:03:58.221723   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:58.308635   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:03:58.721693   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:58.812457   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:03:59.221647   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:59.311946   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:03:59.721490   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:03:59.806003   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:00.222062   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:00.313027   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:00.721965   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:00.803920   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:01.221455   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.324246   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:01.721292   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:01.809641   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:02.221135   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:02.312433   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:02.721682   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:02.810874   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:03.221053   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:03.304559   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:03.721100   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:03.798985   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:04.221247   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:04.314645   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:04.721189   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:04.816369   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:05.221400   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:05.334197   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:05.721999   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:05.809558   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:06.222049   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:06.323380   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:06.721086   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:06.818223   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:07.221142   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:07.312127   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:07.721649   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:07.812287   26786 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1205 20:04:08.221930   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:08.331394   26786 command_runner.go:130] > NAME      SECRETS   AGE
	I1205 20:04:08.331422   26786 command_runner.go:130] > default   0         0s
	I1205 20:04:08.331449   26786 kubeadm.go:1088] duration metric: took 11.017016827s to wait for elevateKubeSystemPrivileges.
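The repeated "kubectl get sa default" calls above are a plain poll: workloads in a namespace cannot use a service-account token until its default ServiceAccount exists, so minikube retries roughly every half second until the controller manager creates it (about 11 seconds here). The same wait as a standalone loop, as a sketch:

    # block until the "default" ServiceAccount appears in the default namespace (sketch)
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done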
	I1205 20:04:08.331470   26786 kubeadm.go:406] StartCluster complete in 25.07375859s
	I1205 20:04:08.331491   26786 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:08.331581   26786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:08.332243   26786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:08.332451   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:04:08.332568   26786 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:04:08.332657   26786 addons.go:69] Setting storage-provisioner=true in profile "multinode-558947"
	I1205 20:04:08.332682   26786 addons.go:231] Setting addon storage-provisioner=true in "multinode-558947"
	I1205 20:04:08.332679   26786 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:08.332702   26786 addons.go:69] Setting default-storageclass=true in profile "multinode-558947"
	I1205 20:04:08.332728   26786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-558947"
	I1205 20:04:08.332730   26786 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:04:08.332787   26786 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:08.333139   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:08.333153   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:08.333099   26786 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:08.333172   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:08.333176   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:08.333809   26786 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 20:04:08.334056   26786 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:08.334069   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:08.334077   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:08.334082   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:08.349024   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I1205 20:04:08.349409   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:08.350025   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:08.350069   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:08.350417   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:08.350920   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:08.350978   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:08.351608   26786 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:04:08.351628   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:08.351637   26786 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:08.351647   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:08 GMT
	I1205 20:04:08.351659   26786 round_trippers.go:580]     Audit-Id: 59236b5f-b37c-4105-9499-f91b68f74af9
	I1205 20:04:08.351669   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:08.351678   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:08.351703   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:08.351717   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:08.351760   26786 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"261","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:08.351682   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I1205 20:04:08.352188   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:08.352308   26786 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"261","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:08.352370   26786 round_trippers.go:463] PUT https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:08.352384   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:08.352392   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:08.352398   26786 round_trippers.go:473]     Content-Type: application/json
	I1205 20:04:08.352409   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:08.352700   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:08.352724   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:08.353064   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:08.353260   26786 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:04:08.355555   26786 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:08.355892   26786 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:08.356209   26786 addons.go:231] Setting addon default-storageclass=true in "multinode-558947"
	I1205 20:04:08.356247   26786 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:04:08.356662   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:08.356710   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:08.362618   26786 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 20:04:08.362640   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:08.362650   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:08.362658   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:08.362665   26786 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:08.362673   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:08 GMT
	I1205 20:04:08.362681   26786 round_trippers.go:580]     Audit-Id: 46987ae4-94fd-44d0-af3a-fa300e63ff2c
	I1205 20:04:08.362689   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:08.362707   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:08.362842   26786 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"346","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:08.363014   26786 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:08.363030   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:08.363041   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:08.363052   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:08.366729   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:08.366755   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:08.366766   26786 round_trippers.go:580]     Audit-Id: fcb1f256-215b-459e-8d53-7efedc9c361c
	I1205 20:04:08.366774   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:08.366782   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:08.366790   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:08.366798   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:08.366807   26786 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:08.366819   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:08 GMT
	I1205 20:04:08.366846   26786 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"346","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:08.366952   26786 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-558947" context rescaled to 1 replicas
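The GET/PUT pair above goes through the autoscaling/v1 Scale subresource to drop the CoreDNS Deployment from kubeadm's default of two replicas to one, which is enough for a single-node profile. The same change from the command line, as a sketch:

    # equivalent of the Scale-subresource PUT above (sketch)
    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deployment coredns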
	I1205 20:04:08.366987   26786 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:04:08.370352   26786 out.go:177] * Verifying Kubernetes components...
	I1205 20:04:08.366732   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I1205 20:04:08.370767   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:08.371659   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:04:08.371843   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:08.372326   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:08.372548   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:08.372576   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:08.372755   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:08.372773   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:08.372967   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:08.373066   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:08.373140   26786 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:04:08.373613   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:08.373665   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:08.374876   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:04:08.376636   26786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:04:08.378156   26786 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:04:08.378174   26786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:04:08.378188   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:04:08.381180   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:08.381540   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:04:08.381560   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:08.381713   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:04:08.381882   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:04:08.381983   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:04:08.382104   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:04:08.388602   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I1205 20:04:08.388953   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:08.389361   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:08.389382   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:08.389730   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:08.389893   26786 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:04:08.391281   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:04:08.391500   26786 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:04:08.391514   26786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:04:08.391528   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:04:08.393885   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:08.394193   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:04:08.394218   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:08.394398   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:04:08.394565   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:04:08.394720   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:04:08.394851   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:04:08.491956   26786 command_runner.go:130] > apiVersion: v1
	I1205 20:04:08.491984   26786 command_runner.go:130] > data:
	I1205 20:04:08.491991   26786 command_runner.go:130] >   Corefile: |
	I1205 20:04:08.491997   26786 command_runner.go:130] >     .:53 {
	I1205 20:04:08.492004   26786 command_runner.go:130] >         errors
	I1205 20:04:08.492010   26786 command_runner.go:130] >         health {
	I1205 20:04:08.492016   26786 command_runner.go:130] >            lameduck 5s
	I1205 20:04:08.492020   26786 command_runner.go:130] >         }
	I1205 20:04:08.492024   26786 command_runner.go:130] >         ready
	I1205 20:04:08.492030   26786 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1205 20:04:08.492034   26786 command_runner.go:130] >            pods insecure
	I1205 20:04:08.492039   26786 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1205 20:04:08.492044   26786 command_runner.go:130] >            ttl 30
	I1205 20:04:08.492047   26786 command_runner.go:130] >         }
	I1205 20:04:08.492051   26786 command_runner.go:130] >         prometheus :9153
	I1205 20:04:08.492062   26786 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1205 20:04:08.492066   26786 command_runner.go:130] >            max_concurrent 1000
	I1205 20:04:08.492075   26786 command_runner.go:130] >         }
	I1205 20:04:08.492087   26786 command_runner.go:130] >         cache 30
	I1205 20:04:08.492094   26786 command_runner.go:130] >         loop
	I1205 20:04:08.492100   26786 command_runner.go:130] >         reload
	I1205 20:04:08.492108   26786 command_runner.go:130] >         loadbalance
	I1205 20:04:08.492112   26786 command_runner.go:130] >     }
	I1205 20:04:08.492116   26786 command_runner.go:130] > kind: ConfigMap
	I1205 20:04:08.492119   26786 command_runner.go:130] > metadata:
	I1205 20:04:08.492128   26786 command_runner.go:130] >   creationTimestamp: "2023-12-05T20:03:55Z"
	I1205 20:04:08.492132   26786 command_runner.go:130] >   name: coredns
	I1205 20:04:08.492137   26786 command_runner.go:130] >   namespace: kube-system
	I1205 20:04:08.492144   26786 command_runner.go:130] >   resourceVersion: "257"
	I1205 20:04:08.492148   26786 command_runner.go:130] >   uid: 91b078ea-72c0-4b91-95c4-879eb6cb01d7
	I1205 20:04:08.493604   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
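The sed pipeline above rewrites the Corefile fetched a few lines earlier: it inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.39.1 just before the forward plugin, adds a log directive before errors, and pushes the result back with kubectl replace. Reconstructed from those sed expressions (so treat it as a sketch), the relevant part of the new Corefile looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }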
	I1205 20:04:08.493845   26786 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:08.494131   26786 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:08.494403   26786 node_ready.go:35] waiting up to 6m0s for node "multinode-558947" to be "Ready" ...
	I1205 20:04:08.494492   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:08.494501   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:08.494508   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:08.494514   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:08.496912   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:08.496928   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:08.496935   26786 round_trippers.go:580]     Audit-Id: 43510c75-f420-4072-b08e-f8cfa0a55316
	I1205 20:04:08.496943   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:08.496951   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:08.496963   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:08.496972   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:08.496980   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:08 GMT
	I1205 20:04:08.497102   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"331","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5987 chars]
	I1205 20:04:08.497670   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:08.497683   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:08.497690   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:08.497696   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:08.500033   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:08.500056   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:08.500065   26786 round_trippers.go:580]     Audit-Id: b467ce11-8b8e-42c4-a143-e9c4c00d98ab
	I1205 20:04:08.500074   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:08.500081   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:08.500089   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:08.500100   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:08.500109   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:08 GMT
	I1205 20:04:08.500250   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"331","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5987 chars]
	I1205 20:04:08.589983   26786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:04:08.604819   26786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:04:09.001505   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:09.001529   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:09.001540   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:09.001547   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:09.023015   26786 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1205 20:04:09.023042   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:09.023049   26786 round_trippers.go:580]     Audit-Id: 67a5512e-eb3d-4e75-9991-49b270f12979
	I1205 20:04:09.023055   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:09.023060   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:09.023065   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:09.023070   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:09.023076   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:09 GMT
	I1205 20:04:09.024030   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:09.361043   26786 command_runner.go:130] > configmap/coredns replaced
	I1205 20:04:09.363421   26786 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 20:04:09.363435   26786 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1205 20:04:09.363488   26786 main.go:141] libmachine: Making call to close driver server
	I1205 20:04:09.363505   26786 main.go:141] libmachine: (multinode-558947) Calling .Close
	I1205 20:04:09.363840   26786 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:04:09.363861   26786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:04:09.363873   26786 main.go:141] libmachine: Making call to close driver server
	I1205 20:04:09.363885   26786 main.go:141] libmachine: (multinode-558947) Calling .Close
	I1205 20:04:09.364152   26786 main.go:141] libmachine: (multinode-558947) DBG | Closing plugin on server side
	I1205 20:04:09.364161   26786 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:04:09.364177   26786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:04:09.364283   26786 round_trippers.go:463] GET https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:04:09.364295   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:09.364305   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:09.364315   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:09.374794   26786 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1205 20:04:09.374814   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:09.374823   26786 round_trippers.go:580]     Audit-Id: 83644a3b-d7f4-4865-acc2-1e2890a0b688
	I1205 20:04:09.374829   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:09.374837   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:09.374844   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:09.374852   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:09.374865   26786 round_trippers.go:580]     Content-Length: 1273
	I1205 20:04:09.374874   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:09 GMT
	I1205 20:04:09.375070   26786 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"standard","uid":"c1bd291e-75ab-43e3-b008-35bf72eaee01","resourceVersion":"392","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1205 20:04:09.375582   26786 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c1bd291e-75ab-43e3-b008-35bf72eaee01","resourceVersion":"392","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:04:09.375658   26786 round_trippers.go:463] PUT https://192.168.39.3:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:04:09.375669   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:09.375676   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:09.375682   26786 round_trippers.go:473]     Content-Type: application/json
	I1205 20:04:09.375690   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:09.379479   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:09.379494   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:09.379503   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:09.379511   26786 round_trippers.go:580]     Content-Length: 1220
	I1205 20:04:09.379520   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:09 GMT
	I1205 20:04:09.379532   26786 round_trippers.go:580]     Audit-Id: d05e2c38-e85a-4f91-aa92-b6bc033a60e3
	I1205 20:04:09.379544   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:09.379556   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:09.379569   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:09.379608   26786 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c1bd291e-75ab-43e3-b008-35bf72eaee01","resourceVersion":"392","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1205 20:04:09.379733   26786 main.go:141] libmachine: Making call to close driver server
	I1205 20:04:09.379750   26786 main.go:141] libmachine: (multinode-558947) Calling .Close
	I1205 20:04:09.379956   26786 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:04:09.379977   26786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:04:09.501717   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:09.501743   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:09.501755   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:09.501763   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:09.509844   26786 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1205 20:04:09.509866   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:09.509873   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:09.509879   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:09 GMT
	I1205 20:04:09.509884   26786 round_trippers.go:580]     Audit-Id: c478250a-1af1-4e6c-8624-27697119f82d
	I1205 20:04:09.509889   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:09.509895   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:09.509899   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:09.510107   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:09.627013   26786 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1205 20:04:09.627044   26786 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1205 20:04:09.627055   26786 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:04:09.627075   26786 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1205 20:04:09.627092   26786 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1205 20:04:09.627100   26786 command_runner.go:130] > pod/storage-provisioner created
	I1205 20:04:09.627123   26786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.022277606s)
	I1205 20:04:09.627175   26786 main.go:141] libmachine: Making call to close driver server
	I1205 20:04:09.627186   26786 main.go:141] libmachine: (multinode-558947) Calling .Close
	I1205 20:04:09.627461   26786 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:04:09.627479   26786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:04:09.627507   26786 main.go:141] libmachine: (multinode-558947) DBG | Closing plugin on server side
	I1205 20:04:09.627558   26786 main.go:141] libmachine: Making call to close driver server
	I1205 20:04:09.627579   26786 main.go:141] libmachine: (multinode-558947) Calling .Close
	I1205 20:04:09.627811   26786 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:04:09.627852   26786 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:04:09.627832   26786 main.go:141] libmachine: (multinode-558947) DBG | Closing plugin on server side
	I1205 20:04:09.630968   26786 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 20:04:09.632426   26786 addons.go:502] enable addons completed in 1.299857152s: enabled=[default-storageclass storage-provisioner]
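(Editorial note, not part of the log.) The two addons reported here correspond to the manifests applied just above: the storage-provisioner objects and the "standard" StorageClass, which the PUT on /apis/storage.k8s.io/v1/storageclasses/standard confirms carries storageclass.kubernetes.io/is-default-class set to "true". Purely as an illustration (this is not minikube code; the kubeconfig location is an assumption), a minimal client-go check of that end state could look like:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the cluster is reachable through the default ~/.kube/config context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The addon applied above creates a StorageClass named "standard" and marks it default.
	sc, err := cs.StorageV1().StorageClasses().Get(context.Background(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("standard is default StorageClass:",
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true")
}

The log continues below with the node-readiness polling.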
	I1205 20:04:10.001770   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:10.001806   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:10.001824   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:10.001833   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:10.005689   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:10.005722   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:10.005733   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:10.005741   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:10.005751   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:10.005758   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:10 GMT
	I1205 20:04:10.005767   26786 round_trippers.go:580]     Audit-Id: 14d4134c-080c-4702-9646-4c624f08cb6e
	I1205 20:04:10.005774   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:10.005947   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:10.501588   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:10.501613   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:10.501621   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:10.501627   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:10.504310   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:10.504332   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:10.504342   26786 round_trippers.go:580]     Audit-Id: 6abaf6d4-df5a-45c6-8409-e7765b8eab8a
	I1205 20:04:10.504348   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:10.504353   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:10.504358   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:10.504363   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:10.504368   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:10 GMT
	I1205 20:04:10.504525   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:10.504965   26786 node_ready.go:58] node "multinode-558947" has status "Ready":"False"
	I1205 20:04:11.001528   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:11.001549   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:11.001557   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:11.001563   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:11.004132   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:11.004152   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:11.004162   26786 round_trippers.go:580]     Audit-Id: 7a34a3a7-ac74-47b1-940c-6cd82e35aad8
	I1205 20:04:11.004168   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:11.004173   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:11.004179   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:11.004185   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:11.004197   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:11 GMT
	I1205 20:04:11.004388   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:11.500988   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:11.501017   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:11.501025   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:11.501031   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:11.503828   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:11.503856   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:11.503882   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:11.503890   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:11.503897   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:11.503908   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:11.503915   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:11 GMT
	I1205 20:04:11.503923   26786 round_trippers.go:580]     Audit-Id: 28b2318e-fdae-444b-a42b-4e41ba538e5e
	I1205 20:04:11.504222   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:12.000842   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:12.000868   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:12.000876   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:12.000882   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:12.003732   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:12.003758   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:12.003765   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:12.003770   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:12.003775   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:12 GMT
	I1205 20:04:12.003781   26786 round_trippers.go:580]     Audit-Id: 2c9b5110-33ed-409c-8d0e-ffe55cc5411a
	I1205 20:04:12.003793   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:12.003799   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:12.004399   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:12.501007   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:12.501041   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:12.501049   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:12.501055   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:12.503635   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:12.503658   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:12.503665   26786 round_trippers.go:580]     Audit-Id: 9bc2f72a-6f44-4b43-9347-2f391bbaad6c
	I1205 20:04:12.503670   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:12.503676   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:12.503681   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:12.503686   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:12.503690   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:12 GMT
	I1205 20:04:12.504002   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:13.001654   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:13.001684   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:13.001692   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:13.001698   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:13.004095   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:13.004115   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:13.004137   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:13.004143   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:13 GMT
	I1205 20:04:13.004148   26786 round_trippers.go:580]     Audit-Id: cdeba7ec-9c11-4c99-8183-a33ce0a3a5a4
	I1205 20:04:13.004154   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:13.004160   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:13.004165   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:13.004627   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:13.004913   26786 node_ready.go:58] node "multinode-558947" has status "Ready":"False"
	I1205 20:04:13.501775   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:13.501799   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:13.501807   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:13.501813   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:13.505207   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:13.505226   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:13.505233   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:13.505238   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:13.505243   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:13 GMT
	I1205 20:04:13.505253   26786 round_trippers.go:580]     Audit-Id: 9060ae31-1fc7-4af7-b3f1-36222796fbbc
	I1205 20:04:13.505261   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:13.505269   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:13.505528   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:14.001162   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:14.001189   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.001197   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.001203   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.003563   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.003584   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.003591   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.003596   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.003601   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.003606   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.003611   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.003616   26786 round_trippers.go:580]     Audit-Id: 11770e73-f23a-44cb-84a2-213b3a0fdf0a
	I1205 20:04:14.004076   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"359","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6091 chars]
	I1205 20:04:14.501774   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:14.501803   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.501811   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.501817   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.504777   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.504799   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.504808   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.504817   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.504824   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.504831   26786 round_trippers.go:580]     Audit-Id: e47136b5-f930-4504-acf0-a8ea66d49eb8
	I1205 20:04:14.504839   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.504847   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.505121   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:14.505410   26786 node_ready.go:49] node "multinode-558947" has status "Ready":"True"
	I1205 20:04:14.505424   26786 node_ready.go:38] duration metric: took 6.010996517s waiting for node "multinode-558947" to be "Ready" ...
	I1205 20:04:14.505431   26786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
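(Editorial note, not part of the log.) The polling recorded above (node_ready.go, then pod_ready.go) amounts to fetching the Node object roughly every 500ms and testing its Ready condition, then doing the same for the system-critical pods within a 6m budget. The following is only a minimal client-go sketch of that node check, not minikube's actual implementation; the kubeconfig path is an assumption, while the node name and interval are taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the profile's kubeconfig is the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	for {
		// Same GET /api/v1/nodes/multinode-558947 the log shows about every 500ms.
		node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-558947", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

The pod wait that follows in the log works the same way: the kube-system pods are listed and each pod's Ready condition is inspected until all report True or the 6m timeout expires.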
	I1205 20:04:14.505498   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:14.505514   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.505523   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.505531   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.508663   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:14.508681   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.508689   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.508697   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.508704   26786 round_trippers.go:580]     Audit-Id: 3cd35d94-0ecd-4993-938d-baeb1b24e2c4
	I1205 20:04:14.508711   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.508718   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.508736   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.509976   26786 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54738 chars]
	I1205 20:04:14.514721   26786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:14.514789   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:14.514798   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.514805   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.514811   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.517145   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.517170   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.517179   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.517188   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.517195   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.517201   26786 round_trippers.go:580]     Audit-Id: def2dcc3-05d9-42db-b771-5e1e85511088
	I1205 20:04:14.517207   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.517212   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.517392   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:14.517805   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:14.517821   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.517832   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.517840   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.519905   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.519924   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.519933   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.519940   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.519946   26786 round_trippers.go:580]     Audit-Id: 98269bcb-a301-4b9b-aa04-174abc19f426
	I1205 20:04:14.519951   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.519957   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.519962   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.520119   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:14.520451   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:14.520465   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.520472   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.520478   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.522597   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:14.522613   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.522622   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.522631   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.522639   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.522648   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.522658   26786 round_trippers.go:580]     Audit-Id: 2020fa5d-ea3b-4edd-9f8e-5c15f6799270
	I1205 20:04:14.522678   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.523000   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:14.523359   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:14.523372   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:14.523383   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:14.523391   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:14.525192   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:14.525206   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:14.525215   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:14.525223   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:14 GMT
	I1205 20:04:14.525231   26786 round_trippers.go:580]     Audit-Id: 3c1beb6f-1c59-4004-939f-5d28468b928f
	I1205 20:04:14.525239   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:14.525256   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:14.525265   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:14.525523   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:15.026409   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:15.026442   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:15.026453   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.026462   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:15.030746   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:15.030773   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:15.030784   26786 round_trippers.go:580]     Audit-Id: 54583564-d7ed-43b0-b186-a3f8313ead73
	I1205 20:04:15.030793   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.030802   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.030809   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:15.030814   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:15.030820   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.031452   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:15.032029   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:15.032048   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:15.032059   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:15.032070   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.037605   26786 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:04:15.037628   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:15.037641   26786 round_trippers.go:580]     Audit-Id: 977cf1e5-f59e-4c3d-a6a7-d869cafdd57a
	I1205 20:04:15.037648   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.037656   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.037664   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:15.037672   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:15.037684   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.038206   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:15.526905   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:15.526927   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:15.526935   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.526941   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:15.529824   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:15.529844   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:15.529850   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.529856   26786 round_trippers.go:580]     Audit-Id: 3af8b7ed-5c8f-48c3-afdf-036123c417b1
	I1205 20:04:15.529865   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.529870   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.529875   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:15.529880   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:15.530053   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:15.530535   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:15.530551   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:15.530561   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:15.530570   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:15.532754   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:15.532773   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:15.532781   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:15 GMT
	I1205 20:04:15.532788   26786 round_trippers.go:580]     Audit-Id: 387c10f6-7242-4b9f-95b1-2fe5dd6a12ce
	I1205 20:04:15.532796   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:15.532803   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:15.532812   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:15.532826   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:15.532984   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:16.026425   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:16.026450   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:16.026458   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.026464   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:16.029570   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:16.029602   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:16.029612   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.029621   26786 round_trippers.go:580]     Audit-Id: 3ecd548b-2640-4db5-8ba4-1142c5c31de8
	I1205 20:04:16.029629   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.029637   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.029644   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:16.029652   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:16.029884   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:16.030477   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:16.030508   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:16.030522   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.030540   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:16.032826   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:16.032850   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:16.032858   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:16.032867   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.032876   26786 round_trippers.go:580]     Audit-Id: 7c976745-405e-4f2d-b239-7b1a85c724db
	I1205 20:04:16.032884   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.032894   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.032903   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:16.033110   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:16.526877   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:16.526906   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:16.526914   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.526921   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:16.531125   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:16.531155   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:16.531181   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.531189   26786 round_trippers.go:580]     Audit-Id: 1c4e76bf-5a0d-4ce3-8b9f-56862e49d3de
	I1205 20:04:16.531196   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.531204   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.531213   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:16.531221   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:16.531974   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:16.532395   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:16.532408   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:16.532415   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:16.532421   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:16.535964   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:16.535984   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:16.535993   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:16 GMT
	I1205 20:04:16.536001   26786 round_trippers.go:580]     Audit-Id: 3c7ece8a-177a-43c9-8be3-bfd154f6ef9d
	I1205 20:04:16.536007   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:16.536022   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:16.536030   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:16.536039   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:16.536547   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:16.536819   26786 pod_ready.go:102] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"False"
	I1205 20:04:17.026216   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:17.026239   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.026247   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.026253   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.029245   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.029271   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.029282   26786 round_trippers.go:580]     Audit-Id: a133a59e-3b52-4fd9-badd-4fbb9aa1c179
	I1205 20:04:17.029288   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.029293   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.029298   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.029304   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.029309   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.029809   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"428","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1205 20:04:17.030205   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.030217   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.030224   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.030230   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.032324   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.032342   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.032351   26786 round_trippers.go:580]     Audit-Id: 9aca9f57-fcae-4b6a-b931-91d6a8c6002e
	I1205 20:04:17.032358   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.032366   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.032373   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.032382   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.032391   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.032681   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.526364   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:17.526388   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.526397   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.526423   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.529021   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.529044   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.529051   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.529057   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.529062   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.529069   26786 round_trippers.go:580]     Audit-Id: 5e5d527f-eb4a-48e6-ba7b-4c7d2980c57a
	I1205 20:04:17.529076   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.529081   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.529262   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"449","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:04:17.529677   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.529690   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.529697   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.529703   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.532901   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:17.532923   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.532932   26786 round_trippers.go:580]     Audit-Id: a43081e7-37d6-44ea-910a-bbbc22f484f6
	I1205 20:04:17.532938   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.532943   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.532948   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.532953   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.532958   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.533855   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.534140   26786 pod_ready.go:92] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:17.534155   26786 pod_ready.go:81] duration metric: took 3.019414722s waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.534164   26786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.534209   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:04:17.534221   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.534228   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.534234   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.537134   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.537155   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.537164   26786 round_trippers.go:580]     Audit-Id: 55256f18-886b-4620-8065-847bdd770c97
	I1205 20:04:17.537172   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.537184   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.537193   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.537202   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.537216   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.537808   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"438","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:04:17.538175   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.538190   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.538197   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.538203   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.541700   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:17.541724   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.541733   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.541739   26786 round_trippers.go:580]     Audit-Id: 266cf6a3-975b-4179-9fd9-3e9406b6fc37
	I1205 20:04:17.541744   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.541755   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.541764   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.541772   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.542757   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.543033   26786 pod_ready.go:92] pod "etcd-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:17.543048   26786 pod_ready.go:81] duration metric: took 8.879781ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.543059   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.543115   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:04:17.543124   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.543131   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.543137   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.545884   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.545903   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.545911   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.545918   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.545923   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.545928   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.545933   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.545939   26786 round_trippers.go:580]     Audit-Id: de5c7b85-096a-4987-954d-4c217e6b1031
	I1205 20:04:17.546779   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"440","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7371 chars]
	I1205 20:04:17.547125   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.547138   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.547145   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.547151   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.548829   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:17.548846   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.548853   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.548858   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.548865   26786 round_trippers.go:580]     Audit-Id: dfcbcf28-2f4b-4322-ae6c-19652fc42359
	I1205 20:04:17.548870   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.548875   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.548880   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.549090   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.549349   26786 pod_ready.go:92] pod "kube-apiserver-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:17.549362   26786 pod_ready.go:81] duration metric: took 6.297151ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.549369   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.549409   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:04:17.549416   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.549423   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.549429   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.551624   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.551644   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.551653   26786 round_trippers.go:580]     Audit-Id: 59fe7149-2c52-47bb-84cb-771888ba0ee2
	I1205 20:04:17.551659   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.551664   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.551672   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.551681   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.551689   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.552129   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"439","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6946 chars]
	I1205 20:04:17.552466   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.552479   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.552485   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.552491   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.554607   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.554639   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.554647   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.554652   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.554658   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.554663   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.554668   26786 round_trippers.go:580]     Audit-Id: 3e1e9896-1db0-424d-84af-681ed330a3c3
	I1205 20:04:17.554673   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.554902   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.555163   26786 pod_ready.go:92] pod "kube-controller-manager-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:17.555176   26786 pod_ready.go:81] duration metric: took 5.801468ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.555186   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.555225   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:04:17.555233   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.555239   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.555245   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.557284   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.557302   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.557311   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.557317   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.557322   26786 round_trippers.go:580]     Audit-Id: 31621199-eb4a-4b7c-9d82-b1f34a136aa5
	I1205 20:04:17.557327   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.557332   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.557337   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.557606   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"412","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:04:17.702293   26786 request.go:629] Waited for 144.326171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.702358   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:17.702363   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.702370   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.702384   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.704951   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.704975   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.704982   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.704988   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.704993   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.704998   26786 round_trippers.go:580]     Audit-Id: 16197c28-e4a6-44aa-a489-faf218fd9c37
	I1205 20:04:17.705003   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.705008   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.705190   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:17.705472   26786 pod_ready.go:92] pod "kube-proxy-mgmt2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:17.705486   26786 pod_ready.go:81] duration metric: took 150.295409ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.705494   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:17.901899   26786 request.go:629] Waited for 196.324641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:04:17.901956   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:04:17.901961   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:17.901968   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:17.901979   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:17.904709   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:17.904730   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:17.904736   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:17.904741   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:17.904746   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:17.904751   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:17.904757   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:17 GMT
	I1205 20:04:17.904762   26786 round_trippers.go:580]     Audit-Id: 0ea6cc89-741a-4243-b8b3-2627044d5db7
	I1205 20:04:17.904900   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:04:18.102670   26786 request.go:629] Waited for 197.418313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:18.102747   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:18.102752   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.102760   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.102766   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.105381   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:18.105407   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.105417   26786 round_trippers.go:580]     Audit-Id: 0733d712-27c8-4045-9ad0-4a354359fdba
	I1205 20:04:18.105423   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.105428   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.105434   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.105439   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.105445   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.105805   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:18.106108   26786 pod_ready.go:92] pod "kube-scheduler-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:18.106123   26786 pod_ready.go:81] duration metric: took 400.623688ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:18.106132   26786 pod_ready.go:38] duration metric: took 3.60067032s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:04:18.106146   26786 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:04:18.106198   26786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:04:18.121579   26786 command_runner.go:130] > 1064
	I1205 20:04:18.121627   26786 api_server.go:72] duration metric: took 9.754607558s to wait for apiserver process to appear ...
	I1205 20:04:18.121637   26786 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:04:18.121654   26786 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:04:18.127647   26786 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1205 20:04:18.127707   26786 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I1205 20:04:18.127713   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.127720   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.127729   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.128986   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:18.129000   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.129006   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.129011   26786 round_trippers.go:580]     Content-Length: 264
	I1205 20:04:18.129017   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.129036   26786 round_trippers.go:580]     Audit-Id: a4403892-611e-4390-bb32-15cdc4aea904
	I1205 20:04:18.129050   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.129057   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.129065   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.129079   26786 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1205 20:04:18.129159   26786 api_server.go:141] control plane version: v1.28.4
	I1205 20:04:18.129172   26786 api_server.go:131] duration metric: took 7.529538ms to wait for apiserver health ...
	I1205 20:04:18.129179   26786 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:04:18.302593   26786 request.go:629] Waited for 173.33912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:18.302666   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:18.302672   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.302679   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.302688   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.306955   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:18.306979   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.306987   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.306993   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.306998   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.307016   26786 round_trippers.go:580]     Audit-Id: 8dc14ccb-e41d-4b1f-9d7c-dd6646e62daf
	I1205 20:04:18.307021   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.307026   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.308164   26786 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"449","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53916 chars]
	I1205 20:04:18.309766   26786 system_pods.go:59] 8 kube-system pods found
	I1205 20:04:18.309786   26786 system_pods.go:61] "coredns-5dd5756b68-knl4d" [28d6c367-593c-469a-90c6-b3c13cedc3df] Running
	I1205 20:04:18.309790   26786 system_pods.go:61] "etcd-multinode-558947" [118e2032-1898-42c0-9aa2-3f15356e9ff3] Running
	I1205 20:04:18.309794   26786 system_pods.go:61] "kindnet-cv76g" [88acd23e-99f5-4c5f-a03c-1c961a511eac] Running
	I1205 20:04:18.309798   26786 system_pods.go:61] "kube-apiserver-multinode-558947" [36300192-b165-4bee-b791-9fce329428f9] Running
	I1205 20:04:18.309803   26786 system_pods.go:61] "kube-controller-manager-multinode-558947" [49ee6fa8-b7cd-4880-b4db-a1717b685750] Running
	I1205 20:04:18.309807   26786 system_pods.go:61] "kube-proxy-mgmt2" [41275cfd-cb0f-4886-b1bc-a86b7e20cc14] Running
	I1205 20:04:18.309813   26786 system_pods.go:61] "kube-scheduler-multinode-558947" [526e311f-432f-4c9a-ad6e-19855cae55be] Running
	I1205 20:04:18.309817   26786 system_pods.go:61] "storage-provisioner" [58d4c242-7ea5-49f5-999c-3c9135144038] Running
	I1205 20:04:18.309822   26786 system_pods.go:74] duration metric: took 180.638687ms to wait for pod list to return data ...
	I1205 20:04:18.309831   26786 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:04:18.502237   26786 request.go:629] Waited for 192.345683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:04:18.502324   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:04:18.502330   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.502338   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.502344   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.504890   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:18.504912   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.504919   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.504924   26786 round_trippers.go:580]     Content-Length: 261
	I1205 20:04:18.504933   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.504938   26786 round_trippers.go:580]     Audit-Id: 0ffd0c11-f24c-4d58-9689-b21dcad947e3
	I1205 20:04:18.504946   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.504951   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.504956   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.504975   26786 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a86eaa39-f2bf-4545-87d3-0c9eefaad8ac","resourceVersion":"341","creationTimestamp":"2023-12-05T20:04:08Z"}}]}
	I1205 20:04:18.505155   26786 default_sa.go:45] found service account: "default"
	I1205 20:04:18.505170   26786 default_sa.go:55] duration metric: took 195.334832ms for default service account to be created ...
	I1205 20:04:18.505177   26786 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:04:18.702653   26786 request.go:629] Waited for 197.415779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:18.702716   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:18.702723   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.702730   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.702748   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.706437   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:18.706466   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.706477   26786 round_trippers.go:580]     Audit-Id: f8ec4eb4-12a9-41a2-9b68-efc22013195a
	I1205 20:04:18.706486   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.706495   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.706503   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.706509   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.706515   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.707702   26786 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"449","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53916 chars]
	I1205 20:04:18.709350   26786 system_pods.go:86] 8 kube-system pods found
	I1205 20:04:18.709368   26786 system_pods.go:89] "coredns-5dd5756b68-knl4d" [28d6c367-593c-469a-90c6-b3c13cedc3df] Running
	I1205 20:04:18.709373   26786 system_pods.go:89] "etcd-multinode-558947" [118e2032-1898-42c0-9aa2-3f15356e9ff3] Running
	I1205 20:04:18.709378   26786 system_pods.go:89] "kindnet-cv76g" [88acd23e-99f5-4c5f-a03c-1c961a511eac] Running
	I1205 20:04:18.709384   26786 system_pods.go:89] "kube-apiserver-multinode-558947" [36300192-b165-4bee-b791-9fce329428f9] Running
	I1205 20:04:18.709398   26786 system_pods.go:89] "kube-controller-manager-multinode-558947" [49ee6fa8-b7cd-4880-b4db-a1717b685750] Running
	I1205 20:04:18.709404   26786 system_pods.go:89] "kube-proxy-mgmt2" [41275cfd-cb0f-4886-b1bc-a86b7e20cc14] Running
	I1205 20:04:18.709411   26786 system_pods.go:89] "kube-scheduler-multinode-558947" [526e311f-432f-4c9a-ad6e-19855cae55be] Running
	I1205 20:04:18.709421   26786 system_pods.go:89] "storage-provisioner" [58d4c242-7ea5-49f5-999c-3c9135144038] Running
	I1205 20:04:18.709428   26786 system_pods.go:126] duration metric: took 204.246081ms to wait for k8s-apps to be running ...
	I1205 20:04:18.709437   26786 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:04:18.709477   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:18.721919   26786 system_svc.go:56] duration metric: took 12.478167ms WaitForService to wait for kubelet.
	I1205 20:04:18.721938   26786 kubeadm.go:581] duration metric: took 10.354921239s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:04:18.721959   26786 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:04:18.902339   26786 request.go:629] Waited for 180.302428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I1205 20:04:18.902410   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:04:18.902418   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:18.902430   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:18.902441   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:18.905061   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:18.905081   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:18.905090   26786 round_trippers.go:580]     Audit-Id: d4d080ca-ae76-4ecc-ab54-b80b980c4ee6
	I1205 20:04:18.905099   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:18.905106   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:18.905115   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:18.905122   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:18.905130   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:18 GMT
	I1205 20:04:18.905532   26786 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5950 chars]
	I1205 20:04:18.905868   26786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:04:18.905889   26786 node_conditions.go:123] node cpu capacity is 2
	I1205 20:04:18.905899   26786 node_conditions.go:105] duration metric: took 183.934476ms to run NodePressure ...
	I1205 20:04:18.905908   26786 start.go:228] waiting for startup goroutines ...
	I1205 20:04:18.905918   26786 start.go:233] waiting for cluster config update ...
	I1205 20:04:18.905929   26786 start.go:242] writing updated cluster config ...
	I1205 20:04:18.908337   26786 out.go:177] 
	I1205 20:04:18.909988   26786 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:18.910052   26786 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:04:18.911688   26786 out.go:177] * Starting worker node multinode-558947-m02 in cluster multinode-558947
	I1205 20:04:18.912950   26786 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:04:18.912970   26786 cache.go:56] Caching tarball of preloaded images
	I1205 20:04:18.913066   26786 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:04:18.913079   26786 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:04:18.913149   26786 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:04:18.913328   26786 start.go:365] acquiring machines lock for multinode-558947-m02: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:04:18.913370   26786 start.go:369] acquired machines lock for "multinode-558947-m02" in 23.242µs
	I1205 20:04:18.913394   26786 start.go:93] Provisioning new machine with config: &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:04:18.913554   26786 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 20:04:18.915459   26786 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:04:18.915571   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:18.915608   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:18.929311   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I1205 20:04:18.929709   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:18.930105   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:18.930126   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:18.930442   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:18.930626   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:04:18.930765   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:18.930923   26786 start.go:159] libmachine.API.Create for "multinode-558947" (driver="kvm2")
	I1205 20:04:18.930945   26786 client.go:168] LocalClient.Create starting
	I1205 20:04:18.930973   26786 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 20:04:18.931008   26786 main.go:141] libmachine: Decoding PEM data...
	I1205 20:04:18.931024   26786 main.go:141] libmachine: Parsing certificate...
	I1205 20:04:18.931073   26786 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 20:04:18.931090   26786 main.go:141] libmachine: Decoding PEM data...
	I1205 20:04:18.931101   26786 main.go:141] libmachine: Parsing certificate...
	I1205 20:04:18.931117   26786 main.go:141] libmachine: Running pre-create checks...
	I1205 20:04:18.931125   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .PreCreateCheck
	I1205 20:04:18.931276   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetConfigRaw
	I1205 20:04:18.931626   26786 main.go:141] libmachine: Creating machine...
	I1205 20:04:18.931639   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .Create
	I1205 20:04:18.931766   26786 main.go:141] libmachine: (multinode-558947-m02) Creating KVM machine...
	I1205 20:04:18.932862   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found existing default KVM network
	I1205 20:04:18.932980   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found existing private KVM network mk-multinode-558947
	I1205 20:04:18.933090   26786 main.go:141] libmachine: (multinode-558947-m02) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02 ...
	I1205 20:04:18.933117   26786 main.go:141] libmachine: (multinode-558947-m02) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 20:04:18.933201   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:18.933082   27151 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:04:18.933275   26786 main.go:141] libmachine: (multinode-558947-m02) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 20:04:19.133989   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:19.133881   27151 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa...
	I1205 20:04:19.373918   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:19.373813   27151 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/multinode-558947-m02.rawdisk...
	I1205 20:04:19.373947   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Writing magic tar header
	I1205 20:04:19.373960   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Writing SSH key tar header
	I1205 20:04:19.373969   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:19.373940   27151 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02 ...
	I1205 20:04:19.374103   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02
	I1205 20:04:19.374124   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02 (perms=drwx------)
	I1205 20:04:19.374131   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 20:04:19.374147   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:04:19.374162   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 20:04:19.374179   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:04:19.374190   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:04:19.374198   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Checking permissions on dir: /home
	I1205 20:04:19.374207   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Skipping /home - not owner
	I1205 20:04:19.374217   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:04:19.374232   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 20:04:19.374242   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 20:04:19.374251   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:04:19.374263   26786 main.go:141] libmachine: (multinode-558947-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:04:19.374287   26786 main.go:141] libmachine: (multinode-558947-m02) Creating domain...
	I1205 20:04:19.375062   26786 main.go:141] libmachine: (multinode-558947-m02) define libvirt domain using xml: 
	I1205 20:04:19.375095   26786 main.go:141] libmachine: (multinode-558947-m02) <domain type='kvm'>
	I1205 20:04:19.375105   26786 main.go:141] libmachine: (multinode-558947-m02)   <name>multinode-558947-m02</name>
	I1205 20:04:19.375128   26786 main.go:141] libmachine: (multinode-558947-m02)   <memory unit='MiB'>2200</memory>
	I1205 20:04:19.375156   26786 main.go:141] libmachine: (multinode-558947-m02)   <vcpu>2</vcpu>
	I1205 20:04:19.375179   26786 main.go:141] libmachine: (multinode-558947-m02)   <features>
	I1205 20:04:19.375211   26786 main.go:141] libmachine: (multinode-558947-m02)     <acpi/>
	I1205 20:04:19.375234   26786 main.go:141] libmachine: (multinode-558947-m02)     <apic/>
	I1205 20:04:19.375249   26786 main.go:141] libmachine: (multinode-558947-m02)     <pae/>
	I1205 20:04:19.375259   26786 main.go:141] libmachine: (multinode-558947-m02)     
	I1205 20:04:19.375277   26786 main.go:141] libmachine: (multinode-558947-m02)   </features>
	I1205 20:04:19.375290   26786 main.go:141] libmachine: (multinode-558947-m02)   <cpu mode='host-passthrough'>
	I1205 20:04:19.375303   26786 main.go:141] libmachine: (multinode-558947-m02)   
	I1205 20:04:19.375317   26786 main.go:141] libmachine: (multinode-558947-m02)   </cpu>
	I1205 20:04:19.375329   26786 main.go:141] libmachine: (multinode-558947-m02)   <os>
	I1205 20:04:19.375342   26786 main.go:141] libmachine: (multinode-558947-m02)     <type>hvm</type>
	I1205 20:04:19.375357   26786 main.go:141] libmachine: (multinode-558947-m02)     <boot dev='cdrom'/>
	I1205 20:04:19.375370   26786 main.go:141] libmachine: (multinode-558947-m02)     <boot dev='hd'/>
	I1205 20:04:19.375384   26786 main.go:141] libmachine: (multinode-558947-m02)     <bootmenu enable='no'/>
	I1205 20:04:19.375399   26786 main.go:141] libmachine: (multinode-558947-m02)   </os>
	I1205 20:04:19.375410   26786 main.go:141] libmachine: (multinode-558947-m02)   <devices>
	I1205 20:04:19.375422   26786 main.go:141] libmachine: (multinode-558947-m02)     <disk type='file' device='cdrom'>
	I1205 20:04:19.375442   26786 main.go:141] libmachine: (multinode-558947-m02)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/boot2docker.iso'/>
	I1205 20:04:19.375456   26786 main.go:141] libmachine: (multinode-558947-m02)       <target dev='hdc' bus='scsi'/>
	I1205 20:04:19.375477   26786 main.go:141] libmachine: (multinode-558947-m02)       <readonly/>
	I1205 20:04:19.375499   26786 main.go:141] libmachine: (multinode-558947-m02)     </disk>
	I1205 20:04:19.375514   26786 main.go:141] libmachine: (multinode-558947-m02)     <disk type='file' device='disk'>
	I1205 20:04:19.375529   26786 main.go:141] libmachine: (multinode-558947-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:04:19.375550   26786 main.go:141] libmachine: (multinode-558947-m02)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/multinode-558947-m02.rawdisk'/>
	I1205 20:04:19.375563   26786 main.go:141] libmachine: (multinode-558947-m02)       <target dev='hda' bus='virtio'/>
	I1205 20:04:19.375575   26786 main.go:141] libmachine: (multinode-558947-m02)     </disk>
	I1205 20:04:19.375586   26786 main.go:141] libmachine: (multinode-558947-m02)     <interface type='network'>
	I1205 20:04:19.375593   26786 main.go:141] libmachine: (multinode-558947-m02)       <source network='mk-multinode-558947'/>
	I1205 20:04:19.375601   26786 main.go:141] libmachine: (multinode-558947-m02)       <model type='virtio'/>
	I1205 20:04:19.375608   26786 main.go:141] libmachine: (multinode-558947-m02)     </interface>
	I1205 20:04:19.375616   26786 main.go:141] libmachine: (multinode-558947-m02)     <interface type='network'>
	I1205 20:04:19.375622   26786 main.go:141] libmachine: (multinode-558947-m02)       <source network='default'/>
	I1205 20:04:19.375631   26786 main.go:141] libmachine: (multinode-558947-m02)       <model type='virtio'/>
	I1205 20:04:19.375637   26786 main.go:141] libmachine: (multinode-558947-m02)     </interface>
	I1205 20:04:19.375644   26786 main.go:141] libmachine: (multinode-558947-m02)     <serial type='pty'>
	I1205 20:04:19.375653   26786 main.go:141] libmachine: (multinode-558947-m02)       <target port='0'/>
	I1205 20:04:19.375662   26786 main.go:141] libmachine: (multinode-558947-m02)     </serial>
	I1205 20:04:19.375671   26786 main.go:141] libmachine: (multinode-558947-m02)     <console type='pty'>
	I1205 20:04:19.375677   26786 main.go:141] libmachine: (multinode-558947-m02)       <target type='serial' port='0'/>
	I1205 20:04:19.375685   26786 main.go:141] libmachine: (multinode-558947-m02)     </console>
	I1205 20:04:19.375691   26786 main.go:141] libmachine: (multinode-558947-m02)     <rng model='virtio'>
	I1205 20:04:19.375700   26786 main.go:141] libmachine: (multinode-558947-m02)       <backend model='random'>/dev/random</backend>
	I1205 20:04:19.375705   26786 main.go:141] libmachine: (multinode-558947-m02)     </rng>
	I1205 20:04:19.375714   26786 main.go:141] libmachine: (multinode-558947-m02)     
	I1205 20:04:19.375719   26786 main.go:141] libmachine: (multinode-558947-m02)     
	I1205 20:04:19.375726   26786 main.go:141] libmachine: (multinode-558947-m02)   </devices>
	I1205 20:04:19.375731   26786 main.go:141] libmachine: (multinode-558947-m02) </domain>
	I1205 20:04:19.375752   26786 main.go:141] libmachine: (multinode-558947-m02) 
	I1205 20:04:19.382387   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:5f:d1:f5 in network default
	I1205 20:04:19.382908   26786 main.go:141] libmachine: (multinode-558947-m02) Ensuring networks are active...
	I1205 20:04:19.382926   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:19.383577   26786 main.go:141] libmachine: (multinode-558947-m02) Ensuring network default is active
	I1205 20:04:19.383896   26786 main.go:141] libmachine: (multinode-558947-m02) Ensuring network mk-multinode-558947 is active
	I1205 20:04:19.384318   26786 main.go:141] libmachine: (multinode-558947-m02) Getting domain xml...
	I1205 20:04:19.385061   26786 main.go:141] libmachine: (multinode-558947-m02) Creating domain...
	I1205 20:04:20.631947   26786 main.go:141] libmachine: (multinode-558947-m02) Waiting to get IP...
	I1205 20:04:20.632691   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:20.633190   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:20.633250   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:20.633184   27151 retry.go:31] will retry after 251.046755ms: waiting for machine to come up
	I1205 20:04:20.885819   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:20.886233   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:20.886261   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:20.886180   27151 retry.go:31] will retry after 239.225929ms: waiting for machine to come up
	I1205 20:04:21.126730   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:21.127246   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:21.127279   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:21.127193   27151 retry.go:31] will retry after 430.374805ms: waiting for machine to come up
	I1205 20:04:21.558939   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:21.559389   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:21.559413   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:21.559351   27151 retry.go:31] will retry after 524.831306ms: waiting for machine to come up
	I1205 20:04:22.085944   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:22.086498   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:22.086542   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:22.086442   27151 retry.go:31] will retry after 684.609835ms: waiting for machine to come up
	I1205 20:04:22.772167   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:22.772642   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:22.772674   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:22.772589   27151 retry.go:31] will retry after 773.902803ms: waiting for machine to come up
	I1205 20:04:23.548355   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:23.548675   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:23.548708   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:23.548627   27151 retry.go:31] will retry after 740.524809ms: waiting for machine to come up
	I1205 20:04:24.290580   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:24.291044   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:24.291082   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:24.291003   27151 retry.go:31] will retry after 1.294175881s: waiting for machine to come up
	I1205 20:04:25.587389   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:25.587838   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:25.587867   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:25.587792   27151 retry.go:31] will retry after 1.774200226s: waiting for machine to come up
	I1205 20:04:27.363443   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:27.363859   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:27.363896   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:27.363793   27151 retry.go:31] will retry after 1.762821201s: waiting for machine to come up
	I1205 20:04:29.128316   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:29.128695   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:29.128742   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:29.128641   27151 retry.go:31] will retry after 2.434162887s: waiting for machine to come up
	I1205 20:04:31.566095   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:31.566530   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:31.566557   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:31.566505   27151 retry.go:31] will retry after 2.503049627s: waiting for machine to come up
	I1205 20:04:34.071492   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:34.071849   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:34.071866   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:34.071842   27151 retry.go:31] will retry after 2.906005728s: waiting for machine to come up
	I1205 20:04:36.979341   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:36.979720   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find current IP address of domain multinode-558947-m02 in network mk-multinode-558947
	I1205 20:04:36.979745   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | I1205 20:04:36.979673   27151 retry.go:31] will retry after 5.243967536s: waiting for machine to come up
	I1205 20:04:42.227147   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.227629   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has current primary IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.227644   26786 main.go:141] libmachine: (multinode-558947-m02) Found IP for machine: 192.168.39.10
	I1205 20:04:42.227670   26786 main.go:141] libmachine: (multinode-558947-m02) Reserving static IP address...
	I1205 20:04:42.228055   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | unable to find host DHCP lease matching {name: "multinode-558947-m02", mac: "52:54:00:78:96:d8", ip: "192.168.39.10"} in network mk-multinode-558947
	I1205 20:04:42.299732   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Getting to WaitForSSH function...
	I1205 20:04:42.299774   26786 main.go:141] libmachine: (multinode-558947-m02) Reserved static IP address: 192.168.39.10
	I1205 20:04:42.299789   26786 main.go:141] libmachine: (multinode-558947-m02) Waiting for SSH to be available...
	I1205 20:04:42.302452   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.302775   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.302806   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.302868   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Using SSH client type: external
	I1205 20:04:42.302913   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa (-rw-------)
	I1205 20:04:42.302956   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:04:42.302972   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | About to run SSH command:
	I1205 20:04:42.302992   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | exit 0
	I1205 20:04:42.393784   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 20:04:42.394048   26786 main.go:141] libmachine: (multinode-558947-m02) KVM machine creation complete!
	I1205 20:04:42.394371   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetConfigRaw
	I1205 20:04:42.394928   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:42.395151   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:42.395319   26786 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:04:42.395336   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetState
	I1205 20:04:42.396547   26786 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:04:42.396561   26786 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:04:42.396567   26786 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:04:42.396576   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:42.398973   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.399357   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.399387   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.399575   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:42.399753   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.399903   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.400026   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:42.400233   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:42.400613   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:42.400630   26786 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:04:42.517404   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:04:42.517428   26786 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:04:42.517437   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:42.520209   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.520711   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.520758   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.520911   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:42.521124   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.521316   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.521473   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:42.521638   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:42.522009   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:42.522025   26786 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:04:42.638972   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 20:04:42.639044   26786 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:04:42.639058   26786 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:04:42.639070   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:04:42.639343   26786 buildroot.go:166] provisioning hostname "multinode-558947-m02"
	I1205 20:04:42.639368   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:04:42.639476   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:42.641984   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.642307   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.642334   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.642521   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:42.642712   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.642892   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.643000   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:42.643125   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:42.643453   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:42.643472   26786 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947-m02 && echo "multinode-558947-m02" | sudo tee /etc/hostname
	I1205 20:04:42.779084   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-558947-m02
	
	I1205 20:04:42.779121   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:42.781489   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.781850   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.781877   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.782052   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:42.782265   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.782444   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:42.782601   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:42.782779   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:42.783082   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:42.783099   26786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-558947-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-558947-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-558947-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:04:42.911259   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:04:42.911288   26786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:04:42.911303   26786 buildroot.go:174] setting up certificates
	I1205 20:04:42.911311   26786 provision.go:83] configureAuth start
	I1205 20:04:42.911321   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:04:42.911564   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:04:42.914235   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.914610   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.914642   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.914782   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:42.917087   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.917455   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:42.917487   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:42.917573   26786 provision.go:138] copyHostCerts
	I1205 20:04:42.917603   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:04:42.917633   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:04:42.917644   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:04:42.917709   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:04:42.917776   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:04:42.917795   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:04:42.917802   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:04:42.917826   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:04:42.917872   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:04:42.917890   26786 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:04:42.917896   26786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:04:42.917917   26786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:04:42.917958   26786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.multinode-558947-m02 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube multinode-558947-m02]
	I1205 20:04:43.048131   26786 provision.go:172] copyRemoteCerts
	I1205 20:04:43.048184   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:04:43.048206   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:43.050649   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.050935   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.050966   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.051132   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.051319   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.051460   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.051591   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:04:43.139832   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:04:43.139903   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:04:43.163623   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:04:43.163702   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 20:04:43.186031   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:04:43.186126   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:04:43.208859   26786 provision.go:86] duration metric: configureAuth took 297.535132ms
	I1205 20:04:43.208890   26786 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:04:43.209058   26786 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:43.209125   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:43.211797   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.212189   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.212222   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.212440   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.212629   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.212756   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.212882   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.213023   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:43.213409   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:43.213435   26786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:04:43.525476   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:04:43.525500   26786 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:04:43.525512   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetURL
	I1205 20:04:43.526867   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | Using libvirt version 6000000
	I1205 20:04:43.529188   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.529585   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.529609   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.529770   26786 main.go:141] libmachine: Docker is up and running!
	I1205 20:04:43.529783   26786 main.go:141] libmachine: Reticulating splines...
	I1205 20:04:43.529788   26786 client.go:171] LocalClient.Create took 24.598834443s
	I1205 20:04:43.529810   26786 start.go:167] duration metric: libmachine.API.Create for "multinode-558947" took 24.598887255s
	I1205 20:04:43.529842   26786 start.go:300] post-start starting for "multinode-558947-m02" (driver="kvm2")
	I1205 20:04:43.529854   26786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:04:43.529869   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:43.530147   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:04:43.530181   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:43.532752   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.533243   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.533281   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.533381   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.533533   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.533707   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.533883   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:04:43.621635   26786 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:04:43.625741   26786 command_runner.go:130] > NAME=Buildroot
	I1205 20:04:43.625766   26786 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1205 20:04:43.625776   26786 command_runner.go:130] > ID=buildroot
	I1205 20:04:43.625783   26786 command_runner.go:130] > VERSION_ID=2021.02.12
	I1205 20:04:43.625788   26786 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1205 20:04:43.625815   26786 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:04:43.625829   26786 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:04:43.625882   26786 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:04:43.625950   26786 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:04:43.625959   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 20:04:43.626036   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:04:43.635102   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:04:43.656912   26786 start.go:303] post-start completed in 127.052682ms
	I1205 20:04:43.656966   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetConfigRaw
	I1205 20:04:43.657502   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:04:43.660274   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.660605   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.660635   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.660913   26786 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:04:43.661131   26786 start.go:128] duration metric: createHost completed in 24.747564004s
	I1205 20:04:43.661162   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:43.663320   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.663646   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.663674   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.663829   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.664017   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.664186   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.664320   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.664474   26786 main.go:141] libmachine: Using SSH client type: native
	I1205 20:04:43.664777   26786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:04:43.664790   26786 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:04:43.783382   26786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701806683.765807180
	
	I1205 20:04:43.783406   26786 fix.go:206] guest clock: 1701806683.765807180
	I1205 20:04:43.783416   26786 fix.go:219] Guest: 2023-12-05 20:04:43.76580718 +0000 UTC Remote: 2023-12-05 20:04:43.66114568 +0000 UTC m=+93.082063592 (delta=104.6615ms)
	I1205 20:04:43.783456   26786 fix.go:190] guest clock delta is within tolerance: 104.6615ms
	I1205 20:04:43.783466   26786 start.go:83] releasing machines lock for "multinode-558947-m02", held for 24.870083505s
	I1205 20:04:43.783489   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:43.783736   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:04:43.786211   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.786614   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.786637   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.789133   26786 out.go:177] * Found network options:
	I1205 20:04:43.790517   26786 out.go:177]   - NO_PROXY=192.168.39.3
	W1205 20:04:43.791844   26786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:04:43.791886   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:43.792533   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:43.792742   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:04:43.792867   26786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:04:43.792925   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	W1205 20:04:43.792962   26786 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:04:43.793023   26786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:04:43.793042   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:04:43.795642   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.795919   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.796036   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.796064   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.796219   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.796218   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:43.796244   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:43.796425   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.796436   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:04:43.796584   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.796652   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:04:43.796748   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:04:43.797351   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:04:43.797489   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:04:44.037565   26786 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:04:44.037618   26786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:04:44.043466   26786 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:04:44.043510   26786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:04:44.043571   26786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:04:44.059390   26786 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1205 20:04:44.059449   26786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:04:44.059457   26786 start.go:475] detecting cgroup driver to use...
	I1205 20:04:44.059516   26786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:04:44.073596   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:04:44.086639   26786 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:04:44.086689   26786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:04:44.099902   26786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:04:44.114203   26786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:04:44.226933   26786 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1205 20:04:44.227014   26786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:04:44.240285   26786 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:04:44.349240   26786 docker.go:219] disabling docker service ...
	I1205 20:04:44.349300   26786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:04:44.364289   26786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:04:44.376484   26786 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1205 20:04:44.376572   26786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:04:44.389857   26786 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:04:44.490684   26786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:04:44.607372   26786 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1205 20:04:44.607401   26786 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:04:44.607462   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:04:44.620573   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:04:44.639701   26786 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:04:44.639733   26786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:04:44.639775   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:44.649758   26786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:04:44.649825   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:44.659979   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:44.670040   26786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:04:44.679929   26786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:04:44.690300   26786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:04:44.700532   26786 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:04:44.700591   26786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:04:44.700629   26786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:04:44.713490   26786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:04:44.724180   26786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:04:44.848208   26786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:04:45.021984   26786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:04:45.022047   26786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:04:45.028018   26786 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:04:45.028039   26786 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:04:45.028045   26786 command_runner.go:130] > Device: 16h/22d	Inode: 698         Links: 1
	I1205 20:04:45.028052   26786 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:04:45.028057   26786 command_runner.go:130] > Access: 2023-12-05 20:04:44.993701303 +0000
	I1205 20:04:45.028063   26786 command_runner.go:130] > Modify: 2023-12-05 20:04:44.993701303 +0000
	I1205 20:04:45.028068   26786 command_runner.go:130] > Change: 2023-12-05 20:04:44.993701303 +0000
	I1205 20:04:45.028071   26786 command_runner.go:130] >  Birth: -
	I1205 20:04:45.028084   26786 start.go:543] Will wait 60s for crictl version
	I1205 20:04:45.028121   26786 ssh_runner.go:195] Run: which crictl
	I1205 20:04:45.032343   26786 command_runner.go:130] > /usr/bin/crictl
	I1205 20:04:45.032586   26786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:04:45.072747   26786 command_runner.go:130] > Version:  0.1.0
	I1205 20:04:45.072776   26786 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:04:45.072803   26786 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1205 20:04:45.072819   26786 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:04:45.074250   26786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:04:45.074325   26786 ssh_runner.go:195] Run: crio --version
	I1205 20:04:45.122729   26786 command_runner.go:130] > crio version 1.24.1
	I1205 20:04:45.122753   26786 command_runner.go:130] > Version:          1.24.1
	I1205 20:04:45.122761   26786 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:04:45.122766   26786 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:04:45.122771   26786 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:04:45.122779   26786 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:04:45.122783   26786 command_runner.go:130] > Compiler:         gc
	I1205 20:04:45.122788   26786 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:04:45.122793   26786 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:04:45.122801   26786 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:04:45.122806   26786 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:04:45.122816   26786 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:04:45.124050   26786 ssh_runner.go:195] Run: crio --version
	I1205 20:04:45.168474   26786 command_runner.go:130] > crio version 1.24.1
	I1205 20:04:45.168492   26786 command_runner.go:130] > Version:          1.24.1
	I1205 20:04:45.168499   26786 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:04:45.168504   26786 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:04:45.168509   26786 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:04:45.168514   26786 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:04:45.168518   26786 command_runner.go:130] > Compiler:         gc
	I1205 20:04:45.168523   26786 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:04:45.168531   26786 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:04:45.168543   26786 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:04:45.168554   26786 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:04:45.168562   26786 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:04:45.170700   26786 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:04:45.172311   26786 out.go:177]   - env NO_PROXY=192.168.39.3
	I1205 20:04:45.173669   26786 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:04:45.176185   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:45.176490   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:04:45.176516   26786 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:04:45.176757   26786 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:04:45.181105   26786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:04:45.193275   26786 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947 for IP: 192.168.39.10
	I1205 20:04:45.193308   26786 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:04:45.193458   26786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:04:45.193507   26786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:04:45.193526   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:04:45.193542   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:04:45.193557   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:04:45.193569   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:04:45.193644   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:04:45.193715   26786 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:04:45.193742   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:04:45.193788   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:04:45.193827   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:04:45.193859   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:04:45.193916   26786 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:04:45.193953   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 20:04:45.193973   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:45.193992   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 20:04:45.194405   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:04:45.218665   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:04:45.242885   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:04:45.266434   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:04:45.290115   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:04:45.313275   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:04:45.336586   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:04:45.359073   26786 ssh_runner.go:195] Run: openssl version
	I1205 20:04:45.364473   26786 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1205 20:04:45.364541   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:04:45.374899   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:04:45.379173   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:04:45.379195   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:04:45.379236   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:04:45.384339   26786 command_runner.go:130] > 3ec20f2e
	I1205 20:04:45.384433   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:04:45.394165   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:04:45.405186   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:45.409445   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:45.409755   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:45.409800   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:04:45.414883   26786 command_runner.go:130] > b5213941
	I1205 20:04:45.415017   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:04:45.425420   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:04:45.436143   26786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:04:45.441188   26786 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:04:45.441217   26786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:04:45.441259   26786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:04:45.446984   26786 command_runner.go:130] > 51391683
	I1205 20:04:45.447237   26786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:04:45.459106   26786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:04:45.463290   26786 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:04:45.463507   26786 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:04:45.463609   26786 ssh_runner.go:195] Run: crio config
	I1205 20:04:45.517289   26786 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:04:45.517312   26786 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:04:45.517324   26786 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:04:45.517329   26786 command_runner.go:130] > #
	I1205 20:04:45.517340   26786 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:04:45.517352   26786 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:04:45.517361   26786 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:04:45.517371   26786 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:04:45.517375   26786 command_runner.go:130] > # reload'.
	I1205 20:04:45.517381   26786 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:04:45.517391   26786 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:04:45.517397   26786 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:04:45.517402   26786 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:04:45.517408   26786 command_runner.go:130] > [crio]
	I1205 20:04:45.517414   26786 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:04:45.517420   26786 command_runner.go:130] > # containers images, in this directory.
	I1205 20:04:45.517424   26786 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:04:45.517441   26786 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:04:45.517451   26786 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:04:45.517460   26786 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:04:45.517471   26786 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:04:45.517479   26786 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:04:45.517489   26786 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:04:45.517503   26786 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:04:45.517513   26786 command_runner.go:130] > storage_option = [
	I1205 20:04:45.517521   26786 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:04:45.517531   26786 command_runner.go:130] > ]
	I1205 20:04:45.517543   26786 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:04:45.517556   26786 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:04:45.517597   26786 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:04:45.517612   26786 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:04:45.517618   26786 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:04:45.517622   26786 command_runner.go:130] > # always happen on a node reboot
	I1205 20:04:45.517627   26786 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:04:45.517633   26786 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:04:45.517643   26786 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:04:45.517656   26786 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:04:45.517668   26786 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:04:45.517685   26786 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:04:45.517701   26786 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:04:45.517737   26786 command_runner.go:130] > # internal_wipe = true
	I1205 20:04:45.517751   26786 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:04:45.517759   26786 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:04:45.517769   26786 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:04:45.517782   26786 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:04:45.517793   26786 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:04:45.517803   26786 command_runner.go:130] > [crio.api]
	I1205 20:04:45.517813   26786 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:04:45.517823   26786 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:04:45.517833   26786 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:04:45.517844   26786 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:04:45.517855   26786 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:04:45.517867   26786 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:04:45.517877   26786 command_runner.go:130] > # stream_port = "0"
	I1205 20:04:45.517887   26786 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:04:45.517897   26786 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:04:45.517907   26786 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:04:45.517917   26786 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:04:45.517928   26786 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:04:45.517941   26786 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:04:45.517951   26786 command_runner.go:130] > # minutes.
	I1205 20:04:45.517959   26786 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:04:45.517973   26786 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:04:45.517984   26786 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:04:45.517995   26786 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:04:45.518005   26786 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:04:45.518018   26786 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:04:45.518030   26786 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:04:45.518037   26786 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:04:45.518046   26786 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:04:45.518057   26786 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:04:45.518069   26786 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:04:45.518081   26786 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 20:04:45.518123   26786 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:04:45.518136   26786 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:04:45.518144   26786 command_runner.go:130] > [crio.runtime]
	I1205 20:04:45.518152   26786 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:04:45.518164   26786 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:04:45.518173   26786 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:04:45.518184   26786 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:04:45.518194   26786 command_runner.go:130] > # default_ulimits = [
	I1205 20:04:45.518201   26786 command_runner.go:130] > # ]
	I1205 20:04:45.518215   26786 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:04:45.518223   26786 command_runner.go:130] > # no_pivot = false
	I1205 20:04:45.518232   26786 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:04:45.518249   26786 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:04:45.518260   26786 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:04:45.518282   26786 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:04:45.518294   26786 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:04:45.518306   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:04:45.518317   26786 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:04:45.518325   26786 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:04:45.518336   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:04:45.518346   26786 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:04:45.518361   26786 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:04:45.518370   26786 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:04:45.518386   26786 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:04:45.518396   26786 command_runner.go:130] > conmon_env = [
	I1205 20:04:45.518406   26786 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:04:45.518415   26786 command_runner.go:130] > ]
	I1205 20:04:45.518425   26786 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:04:45.518433   26786 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:04:45.518441   26786 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:04:45.518451   26786 command_runner.go:130] > # default_env = [
	I1205 20:04:45.518457   26786 command_runner.go:130] > # ]
	I1205 20:04:45.518470   26786 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:04:45.518481   26786 command_runner.go:130] > # selinux = false
	I1205 20:04:45.518492   26786 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:04:45.518505   26786 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:04:45.518516   26786 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:04:45.518522   26786 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:04:45.518534   26786 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:04:45.518545   26786 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:04:45.518558   26786 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:04:45.518566   26786 command_runner.go:130] > # which might increase security.
	I1205 20:04:45.518576   26786 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:04:45.518586   26786 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:04:45.518600   26786 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:04:45.518613   26786 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:04:45.518624   26786 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:04:45.518632   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:45.518639   26786 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:04:45.518651   26786 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:04:45.518658   26786 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:04:45.518667   26786 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:04:45.518675   26786 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:04:45.518684   26786 command_runner.go:130] > # irqbalance daemon.
	I1205 20:04:45.518693   26786 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:04:45.518707   26786 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:04:45.518720   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:45.518729   26786 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:04:45.518739   26786 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:04:45.518749   26786 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:04:45.518765   26786 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:04:45.518773   26786 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:04:45.518787   26786 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:04:45.518801   26786 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:04:45.518810   26786 command_runner.go:130] > # will be added.
	I1205 20:04:45.518818   26786 command_runner.go:130] > # default_capabilities = [
	I1205 20:04:45.518828   26786 command_runner.go:130] > # 	"CHOWN",
	I1205 20:04:45.518835   26786 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:04:45.518845   26786 command_runner.go:130] > # 	"FSETID",
	I1205 20:04:45.518855   26786 command_runner.go:130] > # 	"FOWNER",
	I1205 20:04:45.518865   26786 command_runner.go:130] > # 	"SETGID",
	I1205 20:04:45.518874   26786 command_runner.go:130] > # 	"SETUID",
	I1205 20:04:45.518882   26786 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:04:45.518896   26786 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:04:45.518903   26786 command_runner.go:130] > # 	"KILL",
	I1205 20:04:45.518909   26786 command_runner.go:130] > # ]
	I1205 20:04:45.518923   26786 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:04:45.518936   26786 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:04:45.518944   26786 command_runner.go:130] > # default_sysctls = [
	I1205 20:04:45.518953   26786 command_runner.go:130] > # ]
	I1205 20:04:45.518963   26786 command_runner.go:130] > # List of devices on the host that a
	I1205 20:04:45.518976   26786 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:04:45.518985   26786 command_runner.go:130] > # allowed_devices = [
	I1205 20:04:45.518991   26786 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:04:45.519001   26786 command_runner.go:130] > # ]
	I1205 20:04:45.519010   26786 command_runner.go:130] > # List of additional devices. specified as
	I1205 20:04:45.519026   26786 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:04:45.519038   26786 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:04:45.519079   26786 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:04:45.519090   26786 command_runner.go:130] > # additional_devices = [
	I1205 20:04:45.519096   26786 command_runner.go:130] > # ]
	I1205 20:04:45.519106   26786 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:04:45.519116   26786 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:04:45.519123   26786 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:04:45.519133   26786 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:04:45.519139   26786 command_runner.go:130] > # ]
	I1205 20:04:45.519153   26786 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:04:45.519165   26786 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:04:45.519169   26786 command_runner.go:130] > # Defaults to false.
	I1205 20:04:45.519174   26786 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:04:45.519182   26786 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:04:45.519188   26786 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:04:45.519194   26786 command_runner.go:130] > # hooks_dir = [
	I1205 20:04:45.519199   26786 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:04:45.519205   26786 command_runner.go:130] > # ]
	I1205 20:04:45.519211   26786 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:04:45.519217   26786 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:04:45.519224   26786 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:04:45.519228   26786 command_runner.go:130] > #
	I1205 20:04:45.519235   26786 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:04:45.519249   26786 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:04:45.519261   26786 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:04:45.519269   26786 command_runner.go:130] > #
	I1205 20:04:45.519279   26786 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:04:45.519294   26786 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:04:45.519305   26786 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:04:45.519314   26786 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:04:45.519320   26786 command_runner.go:130] > #
	I1205 20:04:45.519329   26786 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:04:45.519338   26786 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:04:45.519352   26786 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:04:45.519360   26786 command_runner.go:130] > pids_limit = 1024
	I1205 20:04:45.519373   26786 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1205 20:04:45.519386   26786 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:04:45.519398   26786 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:04:45.519413   26786 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:04:45.519423   26786 command_runner.go:130] > # log_size_max = -1
	I1205 20:04:45.519435   26786 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1205 20:04:45.519446   26786 command_runner.go:130] > # log_to_journald = false
	I1205 20:04:45.519456   26786 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:04:45.519466   26786 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:04:45.519472   26786 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:04:45.519487   26786 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:04:45.519497   26786 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:04:45.519507   26786 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:04:45.519517   26786 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:04:45.519527   26786 command_runner.go:130] > # read_only = false
	I1205 20:04:45.519538   26786 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:04:45.519553   26786 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:04:45.519561   26786 command_runner.go:130] > # live configuration reload.
	I1205 20:04:45.519568   26786 command_runner.go:130] > # log_level = "info"
	I1205 20:04:45.519576   26786 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:04:45.519586   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:45.519593   26786 command_runner.go:130] > # log_filter = ""
	I1205 20:04:45.519606   26786 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:04:45.519621   26786 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:04:45.519631   26786 command_runner.go:130] > # separated by comma.
	I1205 20:04:45.519638   26786 command_runner.go:130] > # uid_mappings = ""
	I1205 20:04:45.519646   26786 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:04:45.519658   26786 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:04:45.519668   26786 command_runner.go:130] > # separated by comma.
	I1205 20:04:45.519677   26786 command_runner.go:130] > # gid_mappings = ""
	I1205 20:04:45.519691   26786 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:04:45.519704   26786 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:04:45.519718   26786 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:04:45.519728   26786 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:04:45.519739   26786 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:04:45.519746   26786 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:04:45.519759   26786 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:04:45.519771   26786 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:04:45.519782   26786 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:04:45.519795   26786 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:04:45.519807   26786 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:04:45.519817   26786 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:04:45.519830   26786 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:04:45.519841   26786 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:04:45.519846   26786 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:04:45.519857   26786 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:04:45.519889   26786 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:04:45.519903   26786 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:04:45.519915   26786 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:04:45.519930   26786 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:04:45.519940   26786 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:04:45.519955   26786 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:04:45.519966   26786 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:04:45.519977   26786 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:04:45.519989   26786 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:04:45.520000   26786 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:04:45.520013   26786 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:04:45.520022   26786 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:04:45.520032   26786 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:04:45.520044   26786 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:04:45.520054   26786 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:04:45.520070   26786 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:04:45.520087   26786 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1205 20:04:45.520098   26786 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:04:45.520112   26786 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:04:45.520120   26786 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:04:45.520128   26786 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:04:45.520137   26786 command_runner.go:130] > # ]
	I1205 20:04:45.520149   26786 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:04:45.520163   26786 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:04:45.520177   26786 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:04:45.520190   26786 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:04:45.520199   26786 command_runner.go:130] > #
	I1205 20:04:45.520209   26786 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:04:45.520220   26786 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:04:45.520229   26786 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:04:45.520240   26786 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:04:45.520254   26786 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:04:45.520265   26786 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:04:45.520273   26786 command_runner.go:130] > # Where:
	I1205 20:04:45.520285   26786 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:04:45.520295   26786 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:04:45.520306   26786 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:04:45.520319   26786 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:04:45.520330   26786 command_runner.go:130] > #   in $PATH.
	I1205 20:04:45.520343   26786 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:04:45.520355   26786 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:04:45.520369   26786 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:04:45.520379   26786 command_runner.go:130] > #   state.
	I1205 20:04:45.520388   26786 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:04:45.520397   26786 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1205 20:04:45.520406   26786 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:04:45.520420   26786 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:04:45.520431   26786 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:04:45.520444   26786 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:04:45.520457   26786 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:04:45.520471   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:04:45.520485   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:04:45.520494   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:04:45.520504   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:04:45.520520   26786 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:04:45.520534   26786 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:04:45.520547   26786 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:04:45.520562   26786 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:04:45.520573   26786 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:04:45.520582   26786 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:04:45.520587   26786 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:04:45.520594   26786 command_runner.go:130] > runtime_type = "oci"
	I1205 20:04:45.520602   26786 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:04:45.520612   26786 command_runner.go:130] > runtime_config_path = ""
	I1205 20:04:45.520620   26786 command_runner.go:130] > monitor_path = ""
	I1205 20:04:45.520631   26786 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:04:45.520641   26786 command_runner.go:130] > monitor_exec_cgroup = ""
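The comment block above describes the [crio.runtime.runtimes.*] table format, and the runc entry just above is one instance of it. As a minimal sketch only, assuming CRI-O's config lives at /etc/crio/crio.conf on the node (the path is an assumption, not taken from this log), the handlers defined in such a file could be listed with the BurntSushi TOML decoder:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors the per-handler keys described in the comments above.
type runtimeHandler struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Path is an assumption; adjust to wherever crio.conf actually lives on the node.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	for name, h := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("handler %q: path=%q type=%q root=%q\n", name, h.RuntimePath, h.RuntimeType, h.RuntimeRoot)
	}
}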
	I1205 20:04:45.520652   26786 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:04:45.520662   26786 command_runner.go:130] > # running containers
	I1205 20:04:45.520673   26786 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:04:45.520684   26786 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:04:45.520732   26786 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:04:45.520745   26786 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:04:45.520754   26786 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:04:45.520766   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:04:45.520777   26786 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:04:45.520788   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:04:45.520799   26786 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:04:45.520807   26786 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:04:45.520817   26786 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:04:45.520824   26786 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:04:45.520837   26786 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:04:45.520853   26786 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1205 20:04:45.520869   26786 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:04:45.520882   26786 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:04:45.520902   26786 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:04:45.520916   26786 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:04:45.520926   26786 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:04:45.520938   26786 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:04:45.520948   26786 command_runner.go:130] > # Example:
	I1205 20:04:45.520958   26786 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:04:45.520970   26786 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:04:45.520981   26786 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:04:45.520993   26786 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:04:45.521003   26786 command_runner.go:130] > # cpuset = 0
	I1205 20:04:45.521010   26786 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:04:45.521019   26786 command_runner.go:130] > # Where:
	I1205 20:04:45.521024   26786 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:04:45.521035   26786 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:04:45.521048   26786 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:04:45.521061   26786 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:04:45.521076   26786 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:04:45.521089   26786 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:04:45.521098   26786 command_runner.go:130] > # 
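The workload example above pairs an activation annotation with a per-container override. A minimal sketch of what those annotations look like on a pod object, using placeholder pod/container names ("demo", "demo-ctr") and a made-up cpushares value, built with the client-go API types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The annotation keys follow the workload example above: the activation key
	// carries no value, and the per-container override is a small JSON document.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo",
			Annotations: map[string]string{
				"io.crio/workload":               "",
				"io.crio.workload-type/demo-ctr": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "demo-ctr", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	out, _ := json.MarshalIndent(pod.ObjectMeta.Annotations, "", "  ")
	fmt.Println(string(out))
}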
	I1205 20:04:45.521107   26786 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:04:45.521113   26786 command_runner.go:130] > #
	I1205 20:04:45.521122   26786 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:04:45.521140   26786 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:04:45.521154   26786 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:04:45.521167   26786 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:04:45.521180   26786 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:04:45.521189   26786 command_runner.go:130] > [crio.image]
	I1205 20:04:45.521201   26786 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:04:45.521209   26786 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:04:45.521218   26786 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:04:45.521232   26786 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:04:45.521240   26786 command_runner.go:130] > # global_auth_file = ""
	I1205 20:04:45.521256   26786 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:04:45.521268   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:45.521280   26786 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:04:45.521293   26786 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:04:45.521305   26786 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:04:45.521312   26786 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:04:45.521320   26786 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:04:45.521333   26786 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:04:45.521344   26786 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1205 20:04:45.521358   26786 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1205 20:04:45.521371   26786 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:04:45.521381   26786 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:04:45.521394   26786 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:04:45.521406   26786 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:04:45.521413   26786 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:04:45.521426   26786 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:04:45.521439   26786 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:04:45.521447   26786 command_runner.go:130] > # signature_policy = ""
	I1205 20:04:45.521460   26786 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:04:45.521473   26786 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:04:45.521483   26786 command_runner.go:130] > # changing them here.
	I1205 20:04:45.521493   26786 command_runner.go:130] > # insecure_registries = [
	I1205 20:04:45.521502   26786 command_runner.go:130] > # ]
	I1205 20:04:45.521529   26786 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:04:45.521542   26786 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:04:45.521549   26786 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:04:45.521561   26786 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:04:45.521572   26786 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:04:45.521585   26786 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:04:45.521592   26786 command_runner.go:130] > # CNI plugins.
	I1205 20:04:45.521601   26786 command_runner.go:130] > [crio.network]
	I1205 20:04:45.521609   26786 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:04:45.521618   26786 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 20:04:45.521626   26786 command_runner.go:130] > # cni_default_network = ""
	I1205 20:04:45.521639   26786 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:04:45.521649   26786 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:04:45.521662   26786 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:04:45.521672   26786 command_runner.go:130] > # plugin_dirs = [
	I1205 20:04:45.521681   26786 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:04:45.521690   26786 command_runner.go:130] > # ]
	I1205 20:04:45.521700   26786 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:04:45.521707   26786 command_runner.go:130] > [crio.metrics]
	I1205 20:04:45.521712   26786 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:04:45.521722   26786 command_runner.go:130] > enable_metrics = true
	I1205 20:04:45.521730   26786 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:04:45.521741   26786 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:04:45.521753   26786 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:04:45.521767   26786 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:04:45.521780   26786 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:04:45.521790   26786 command_runner.go:130] > # metrics_collectors = [
	I1205 20:04:45.521799   26786 command_runner.go:130] > # 	"operations",
	I1205 20:04:45.521810   26786 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:04:45.521817   26786 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:04:45.521822   26786 command_runner.go:130] > # 	"operations_errors",
	I1205 20:04:45.521831   26786 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:04:45.521844   26786 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:04:45.521852   26786 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:04:45.521863   26786 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:04:45.521871   26786 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:04:45.521881   26786 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:04:45.521888   26786 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:04:45.521898   26786 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:04:45.521907   26786 command_runner.go:130] > # 	"containers_oom",
	I1205 20:04:45.521915   26786 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:04:45.521919   26786 command_runner.go:130] > # 	"operations_total",
	I1205 20:04:45.521930   26786 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:04:45.521942   26786 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:04:45.521950   26786 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:04:45.521961   26786 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:04:45.521972   26786 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:04:45.521979   26786 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:04:45.521990   26786 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:04:45.521997   26786 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:04:45.522007   26786 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:04:45.522011   26786 command_runner.go:130] > # ]
	I1205 20:04:45.522016   26786 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:04:45.522027   26786 command_runner.go:130] > # metrics_port = 9090
	I1205 20:04:45.522036   26786 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:04:45.522046   26786 command_runner.go:130] > # metrics_socket = ""
	I1205 20:04:45.522055   26786 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:04:45.522069   26786 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:04:45.522082   26786 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:04:45.522092   26786 command_runner.go:130] > # certificate on any modification event.
	I1205 20:04:45.522102   26786 command_runner.go:130] > # metrics_cert = ""
	I1205 20:04:45.522112   26786 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:04:45.522120   26786 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:04:45.522125   26786 command_runner.go:130] > # metrics_key = ""
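With enable_metrics set to true above, CRI-O serves Prometheus-format metrics; 9090 is only the commented-out default port, so the address below is an assumption rather than something this log confirms. A minimal sketch that dumps whatever collectors are exposed (e.g. the operations counters mentioned above):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Address and port are assumptions; the default metrics_port above is 9090.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	// Prometheus text exposition format, one sample family per block.
	io.Copy(os.Stdout, resp.Body)
}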
	I1205 20:04:45.522138   26786 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:04:45.522149   26786 command_runner.go:130] > [crio.tracing]
	I1205 20:04:45.522158   26786 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:04:45.522169   26786 command_runner.go:130] > # enable_tracing = false
	I1205 20:04:45.522179   26786 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 20:04:45.522189   26786 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:04:45.522201   26786 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:04:45.522210   26786 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:04:45.522220   26786 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:04:45.522224   26786 command_runner.go:130] > [crio.stats]
	I1205 20:04:45.522232   26786 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:04:45.522239   26786 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:04:45.522248   26786 command_runner.go:130] > # stats_collection_period = 0
	I1205 20:04:45.522348   26786 command_runner.go:130] ! time="2023-12-05 20:04:45.501292942Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1205 20:04:45.522371   26786 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:04:45.522487   26786 cni.go:84] Creating CNI manager for ""
	I1205 20:04:45.522506   26786 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:04:45.522516   26786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:04:45.522541   26786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-558947 NodeName:multinode-558947-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:04:45.522674   26786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-558947-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:04:45.522729   26786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-558947-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:04:45.522777   26786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:04:45.532967   26786 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1205 20:04:45.533074   26786 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1205 20:04:45.533136   26786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1205 20:04:45.542898   26786 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1205 20:04:45.542924   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1205 20:04:45.542993   26786 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1205 20:04:45.543022   26786 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1205 20:04:45.543059   26786 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1205 20:04:45.547054   26786 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1205 20:04:45.550366   26786 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1205 20:04:45.550393   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1205 20:04:46.098077   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1205 20:04:46.098232   26786 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1205 20:04:46.103655   26786 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1205 20:04:46.103741   26786 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1205 20:04:46.103781   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1205 20:04:46.642660   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:46.658036   26786 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1205 20:04:46.658150   26786 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1205 20:04:46.662426   26786 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1205 20:04:46.662536   26786 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1205 20:04:46.662567   26786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
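Each binary above goes through the same check-then-copy flow: stat the target path, and only transfer the cached file when it is missing. A minimal local sketch of that pattern (paths are placeholders, and a plain file copy stands in for the scp step in the log):

package main

import (
	"errors"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing, mirroring the
// "stat ... No such file or directory" followed by scp seen above.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	} else if !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths, not the exact minikube cache layout.
	if err := ensureBinary("cache/kubelet", "/var/lib/minikube/binaries/v1.28.4/kubelet"); err != nil {
		log.Fatal(err)
	}
}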
	I1205 20:04:47.171492   26786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 20:04:47.180718   26786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1205 20:04:47.197119   26786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:04:47.212448   26786 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1205 20:04:47.216204   26786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:04:47.227283   26786 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:04:47.227514   26786 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:04:47.227623   26786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:04:47.227667   26786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:04:47.245290   26786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I1205 20:04:47.245676   26786 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:04:47.246128   26786 main.go:141] libmachine: Using API Version  1
	I1205 20:04:47.246152   26786 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:04:47.246470   26786 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:04:47.246673   26786 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:04:47.246811   26786 start.go:304] JoinCluster: &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:04:47.246922   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:04:47.246937   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:04:47.249630   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:47.249994   26786 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:04:47.250026   26786 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:04:47.250170   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:04:47.250314   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:04:47.250455   26786 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:04:47.250547   26786 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:04:47.423032   26786 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 4x6lms.lfio7g7z57dkehpp --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:04:47.425295   26786 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:04:47.425340   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x6lms.lfio7g7z57dkehpp --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-558947-m02"
	I1205 20:04:47.468754   26786 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:04:47.611931   26786 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1205 20:04:47.611961   26786 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1205 20:04:47.650071   26786 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:04:47.650104   26786 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:04:47.650113   26786 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:04:47.780833   26786 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1205 20:04:49.822758   26786 command_runner.go:130] > This node has joined the cluster:
	I1205 20:04:49.822783   26786 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1205 20:04:49.822793   26786 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1205 20:04:49.822803   26786 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1205 20:04:49.824771   26786 command_runner.go:130] ! W1205 20:04:47.456910     820 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1205 20:04:49.824796   26786 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:04:49.825032   26786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4x6lms.lfio7g7z57dkehpp --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-558947-m02": (2.399676086s)
	I1205 20:04:49.825056   26786 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:04:50.066455   26786 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
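The join above has two halves: the control plane prints a join command with a fresh token (the kubeadm output a few lines up), and the worker runs that command with node-specific flags appended. A minimal sketch of the control-plane half, assuming kubeadm is on PATH and using placeholder values for the appended flags:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Print a join command with a non-expiring token, as in the log above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))
	// The worker would then execute this; CRI socket and node name below are
	// placeholders, not values taken from this cluster.
	fmt.Println(join + " --cri-socket unix:///var/run/crio/crio.sock --node-name worker-1")
}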
	I1205 20:04:50.066560   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-558947 minikube.k8s.io/updated_at=2023_12_05T20_04_50_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:04:50.214375   26786 command_runner.go:130] > node/multinode-558947-m02 labeled
	I1205 20:04:50.215809   26786 start.go:306] JoinCluster complete in 2.968997847s
	I1205 20:04:50.215833   26786 cni.go:84] Creating CNI manager for ""
	I1205 20:04:50.215839   26786 cni.go:136] 2 nodes found, recommending kindnet
	I1205 20:04:50.215882   26786 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:04:50.221043   26786 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:04:50.221075   26786 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1205 20:04:50.221086   26786 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1205 20:04:50.221096   26786 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:04:50.221103   26786 command_runner.go:130] > Access: 2023-12-05 20:03:24.065701117 +0000
	I1205 20:04:50.221114   26786 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1205 20:04:50.221125   26786 command_runner.go:130] > Change: 2023-12-05 20:03:22.189701117 +0000
	I1205 20:04:50.221135   26786 command_runner.go:130] >  Birth: -
	I1205 20:04:50.221177   26786 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:04:50.221192   26786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:04:50.237696   26786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:04:50.558871   26786 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:04:50.563550   26786 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:04:50.566861   26786 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:04:50.579790   26786 command_runner.go:130] > daemonset.apps/kindnet configured
	I1205 20:04:50.582868   26786 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:50.583163   26786 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:50.583447   26786 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:04:50.583466   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:50.583476   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:50.583485   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:50.586066   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:50.586084   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:50.586093   26786 round_trippers.go:580]     Audit-Id: 102cc6e8-f7cf-40f5-abb2-e41325ae8e7b
	I1205 20:04:50.586100   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:50.586108   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:50.586117   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:50.586129   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:50.586139   26786 round_trippers.go:580]     Content-Length: 291
	I1205 20:04:50.586152   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:50 GMT
	I1205 20:04:50.586179   26786 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"453","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:04:50.586287   26786 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-558947" context rescaled to 1 replicas
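The rescale above goes through the deployment's scale subresource. A minimal client-go sketch of the same operation, with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// Read the current scale of the coredns deployment, then set it to 1 if needed.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}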
	I1205 20:04:50.586321   26786 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:04:50.588226   26786 out.go:177] * Verifying Kubernetes components...
	I1205 20:04:50.589793   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:50.606832   26786 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:04:50.607060   26786 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:04:50.607286   26786 node_ready.go:35] waiting up to 6m0s for node "multinode-558947-m02" to be "Ready" ...
	I1205 20:04:50.607363   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:50.607372   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:50.607384   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:50.607394   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:50.610298   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:50.610319   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:50.610329   26786 round_trippers.go:580]     Audit-Id: 26dc18f0-9940-4695-b636-de1c75f11d8c
	I1205 20:04:50.610341   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:50.610352   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:50.610362   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:50.610372   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:50.610381   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:50.610390   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:50 GMT
	I1205 20:04:50.610527   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:50.610809   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:50.610822   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:50.610832   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:50.610840   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:50.614102   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:50.614128   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:50.614136   26786 round_trippers.go:580]     Audit-Id: 2df6b3b7-97df-405d-a1c4-25542f933ad3
	I1205 20:04:50.614142   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:50.614147   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:50.614153   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:50.614158   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:50.614163   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:50.614168   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:50 GMT
	I1205 20:04:50.614238   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:51.115540   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:51.115561   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:51.115569   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:51.115575   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:51.121015   26786 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:04:51.121037   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:51.121045   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:51 GMT
	I1205 20:04:51.121053   26786 round_trippers.go:580]     Audit-Id: f573d1d6-5a20-4c32-a664-b64fb2ab6913
	I1205 20:04:51.121062   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:51.121070   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:51.121078   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:51.121085   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:51.121091   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:51.121148   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:51.615544   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:51.615566   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:51.615575   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:51.615581   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:51.618585   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:51.618607   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:51.618614   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:51.618620   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:51 GMT
	I1205 20:04:51.618626   26786 round_trippers.go:580]     Audit-Id: 3d2fc237-98b8-4c70-adc6-2d01fb60eb54
	I1205 20:04:51.618631   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:51.618640   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:51.618648   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:51.618657   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:51.618853   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:52.115599   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:52.115632   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:52.115644   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:52.115654   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:52.119237   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:52.119265   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:52.119275   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:52 GMT
	I1205 20:04:52.119284   26786 round_trippers.go:580]     Audit-Id: f8922019-2bc8-43b7-b453-f58b1b1c3725
	I1205 20:04:52.119292   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:52.119300   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:52.119308   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:52.119317   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:52.119326   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:52.119379   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:52.615529   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:52.615582   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:52.615590   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:52.615596   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:52.619716   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:52.619747   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:52.619758   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:52.619768   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:52 GMT
	I1205 20:04:52.619777   26786 round_trippers.go:580]     Audit-Id: 4d7434bd-0935-4b49-8414-c7a3f98427f8
	I1205 20:04:52.619785   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:52.619795   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:52.619809   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:52.619821   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:52.619995   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:52.620282   26786 node_ready.go:58] node "multinode-558947-m02" has status "Ready":"False"
	I1205 20:04:53.114617   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:53.114650   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:53.114659   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:53.114665   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:53.117839   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:53.117866   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:53.117873   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:53.117893   26786 round_trippers.go:580]     Content-Length: 4082
	I1205 20:04:53.117898   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:53 GMT
	I1205 20:04:53.117903   26786 round_trippers.go:580]     Audit-Id: 9c9d88ab-44d6-45c4-a160-33f6af5be986
	I1205 20:04:53.117908   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:53.117914   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:53.117919   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:53.117997   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"506","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3058 chars]
	I1205 20:04:53.615549   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:53.615571   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:53.615579   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:53.615585   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:53.618153   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:53.618180   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:53.618190   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:53 GMT
	I1205 20:04:53.618199   26786 round_trippers.go:580]     Audit-Id: 2e942201-cf67-4606-a82a-04f52ceb676a
	I1205 20:04:53.618207   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:53.618215   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:53.618231   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:53.618239   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:53.618596   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:54.114701   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:54.114727   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:54.114735   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:54.114741   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:54.117666   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:54.117697   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:54.117708   26786 round_trippers.go:580]     Audit-Id: 0eb209bf-5bab-47dd-b9b4-b33d3ae9358c
	I1205 20:04:54.117716   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:54.117724   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:54.117731   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:54.117739   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:54.117747   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:54 GMT
	I1205 20:04:54.118533   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:54.614865   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:54.614898   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:54.614912   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:54.614923   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:54.618472   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:54.618500   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:54.618510   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:54.618519   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:54 GMT
	I1205 20:04:54.618527   26786 round_trippers.go:580]     Audit-Id: df84c325-3fef-4b30-a2f5-98bf7eda3d00
	I1205 20:04:54.618535   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:54.618543   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:54.618551   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:54.618965   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:55.115575   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:55.115609   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:55.115621   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:55.115630   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:55.119083   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:55.119113   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:55.119121   26786 round_trippers.go:580]     Audit-Id: 751cf919-e2cd-465b-92cc-c89581642838
	I1205 20:04:55.119127   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:55.119134   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:55.119142   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:55.119150   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:55.119159   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:55 GMT
	I1205 20:04:55.119551   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:55.119832   26786 node_ready.go:58] node "multinode-558947-m02" has status "Ready":"False"
	I1205 20:04:55.615335   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:55.615362   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:55.615371   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:55.615378   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:55.618814   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:55.618840   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:55.618850   26786 round_trippers.go:580]     Audit-Id: 1d0393e2-3f43-4ec0-b12c-4a7dbe5a7a33
	I1205 20:04:55.618858   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:55.618866   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:55.618885   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:55.618894   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:55.618902   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:55 GMT
	I1205 20:04:55.619399   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:56.114689   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:56.114714   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:56.114724   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:56.114733   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:56.117718   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:56.117735   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:56.117742   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:56.117747   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:56.117753   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:56.117758   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:56 GMT
	I1205 20:04:56.117763   26786 round_trippers.go:580]     Audit-Id: 23c995ca-86d0-43ee-9e89-f42293bd5ffb
	I1205 20:04:56.117773   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:56.118449   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:56.614730   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:56.614754   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:56.614762   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:56.614768   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:56.617443   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:56.617463   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:56.617471   26786 round_trippers.go:580]     Audit-Id: f84cbae7-36c7-42db-95d9-5c64eecbb723
	I1205 20:04:56.617479   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:56.617487   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:56.617495   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:56.617507   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:56.617524   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:56 GMT
	I1205 20:04:56.617681   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:57.115414   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:57.115443   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.115451   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.115458   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.119986   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:57.120005   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.120011   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.120017   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.120025   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.120036   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.120049   26786 round_trippers.go:580]     Audit-Id: d071e6d6-b254-4824-a765-fcf029314c65
	I1205 20:04:57.120059   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.120208   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"510","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3167 chars]
	I1205 20:04:57.120472   26786 node_ready.go:58] node "multinode-558947-m02" has status "Ready":"False"
	I1205 20:04:57.615520   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:57.615547   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.615560   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.615570   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.618707   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:57.618735   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.618758   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.618766   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.618777   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.618788   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.618799   26786 round_trippers.go:580]     Audit-Id: 15313fdd-179f-4a74-8e7e-eaf3e5b44f29
	I1205 20:04:57.618810   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.619206   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"531","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1205 20:04:57.619474   26786 node_ready.go:49] node "multinode-558947-m02" has status "Ready":"True"
	I1205 20:04:57.619489   26786 node_ready.go:38] duration metric: took 7.012188953s waiting for node "multinode-558947-m02" to be "Ready" ...
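	The stretch of log above is produced by a simple readiness poll: the node object is re-fetched roughly every 500ms and its "Ready" condition is checked until it reports True (about 7s in this run). A minimal sketch of such a wait loop, assuming k8s.io/client-go is available; the package and function names are illustrative and this is not minikube's actual node_ready.go code:

	    package nodewait

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitForNodeReady polls the Node object every 500ms until its Ready
	    // condition is True or the timeout expires (illustrative sketch only).
	    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat errors as transient and keep polling
	    			}
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady {
	    					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
	    					return c.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }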
	I1205 20:04:57.619500   26786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:04:57.619557   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:04:57.619566   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.619573   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.619579   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.623917   26786 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:04:57.623940   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.623950   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.623972   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.623980   26786 round_trippers.go:580]     Audit-Id: db3a1327-23bf-4b21-b657-fa0d25f56b9d
	I1205 20:04:57.623985   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.623990   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.623998   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.626048   26786 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"449","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67284 chars]
	I1205 20:04:57.628346   26786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.628414   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:04:57.628423   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.628430   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.628436   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.634733   26786 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:04:57.634761   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.634768   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.634773   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.634779   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.634784   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.634789   26786 round_trippers.go:580]     Audit-Id: 45e86fc7-881e-4309-a70a-95d3f7392686
	I1205 20:04:57.634797   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.634970   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"449","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:04:57.636008   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:57.636029   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.636041   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.636051   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.639275   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:57.639293   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.639300   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.639305   26786 round_trippers.go:580]     Audit-Id: c7c3d1c7-1511-4232-a998-bbef2beb87ad
	I1205 20:04:57.639311   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.639316   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.639321   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.639326   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.639471   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:57.639783   26786 pod_ready.go:92] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:57.639797   26786 pod_ready.go:81] duration metric: took 11.430883ms waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
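	Each per-pod wait in this phase reduces to reading the pod's status conditions and looking for Ready=True. A hedged sketch of that check, reusing the corev1 import from the node sketch above; this is illustrative only, not minikube's pod_ready.go implementation:

	    // podIsReady reports whether the pod's Ready condition is True.
	    func podIsReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }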
	I1205 20:04:57.639805   26786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.639848   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:04:57.639856   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.639862   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.639868   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.641894   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:57.641909   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.641914   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.641919   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.641924   26786 round_trippers.go:580]     Audit-Id: 7ad964dc-7fe3-4c75-af78-f55fe35e9716
	I1205 20:04:57.641929   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.641934   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.641939   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.642251   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"438","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:04:57.642574   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:57.642586   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.642592   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.642598   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.644434   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:57.644449   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.644454   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.644460   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.644465   26786 round_trippers.go:580]     Audit-Id: daf3ba4c-dcdc-4b76-bff1-19774b71b66e
	I1205 20:04:57.644471   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.644476   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.644481   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.644672   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:57.644918   26786 pod_ready.go:92] pod "etcd-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:57.644930   26786 pod_ready.go:81] duration metric: took 5.120371ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.644944   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.644989   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:04:57.644996   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.645002   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.645008   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.647899   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:57.647913   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.647918   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.647924   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.647929   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.647935   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.647941   26786 round_trippers.go:580]     Audit-Id: 86ecd6cb-1d46-4d28-8852-c1f61c55835f
	I1205 20:04:57.647953   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.648163   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"440","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7371 chars]
	I1205 20:04:57.648470   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:57.648481   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.648487   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.648493   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.650305   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:57.650321   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.650327   26786 round_trippers.go:580]     Audit-Id: e529ff66-7c64-4893-86dc-b5a15966ba33
	I1205 20:04:57.650333   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.650340   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.650345   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.650350   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.650356   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.650581   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:57.650824   26786 pod_ready.go:92] pod "kube-apiserver-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:57.650835   26786 pod_ready.go:81] duration metric: took 5.879741ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.650842   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.650876   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:04:57.650883   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.650890   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.650896   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.653048   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:57.653064   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.653070   26786 round_trippers.go:580]     Audit-Id: d24386e1-5b8a-4732-82c5-0dd2d4ea0176
	I1205 20:04:57.653075   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.653080   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.653085   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.653093   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.653099   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.653237   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"439","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6946 chars]
	I1205 20:04:57.653534   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:57.653544   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.653551   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.653556   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.655525   26786 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:04:57.655544   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.655551   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.655556   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.655561   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.655566   26786 round_trippers.go:580]     Audit-Id: acc23516-e57a-45ca-ab02-69e44a741aaa
	I1205 20:04:57.655572   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.655577   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.655712   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:57.655971   26786 pod_ready.go:92] pod "kube-controller-manager-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:57.655985   26786 pod_ready.go:81] duration metric: took 5.137133ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.655993   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:57.816385   26786 request.go:629] Waited for 160.335826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:04:57.816455   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:04:57.816462   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:57.816469   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:57.816476   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:57.819412   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:57.819435   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:57.819445   26786 round_trippers.go:580]     Audit-Id: afef7b2e-72fe-4bbe-99f9-f860d58e32a6
	I1205 20:04:57.819454   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:57.819461   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:57.819468   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:57.819476   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:57.819483   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:57 GMT
	I1205 20:04:57.819795   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"517","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
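	The "Waited for ... due to client-side throttling, not priority and fairness" messages below are emitted by client-go's client-side token-bucket rate limiter, not by the API server's Priority and Fairness: once requests are issued faster than the configured QPS, each call is delayed before being sent (the waits of roughly 160-200ms seen here are consistent with a limiter in the default QPS=5 / Burst=10 range). A sketch of how that limiter is typically configured on a rest.Config, with illustrative values and a hypothetical kubeconfigPath parameter, not minikube's exact settings:

	    import (
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // newThrottledClient builds a clientset whose requests are rate limited
	    // on the client side via the token bucket behind cfg.QPS / cfg.Burst.
	    func newThrottledClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	    	if err != nil {
	    		return nil, err
	    	}
	    	cfg.QPS = 5    // sustained requests per second before calls start queueing
	    	cfg.Burst = 10 // extra headroom for short bursts above QPS
	    	return kubernetes.NewForConfig(cfg)
	    }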
	I1205 20:04:58.015532   26786 request.go:629] Waited for 195.326888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:58.015601   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:04:58.015606   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:58.015614   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:58.015620   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:58.019233   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:58.019258   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:58.019268   26786 round_trippers.go:580]     Audit-Id: 8adddaa4-8508-40b3-8d34-7d1ff5b096c9
	I1205 20:04:58.019274   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:58.019279   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:58.019284   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:58.019289   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:58.019295   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:58 GMT
	I1205 20:04:58.019864   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"531","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_04_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I1205 20:04:58.020119   26786 pod_ready.go:92] pod "kube-proxy-kjph8" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:58.020132   26786 pod_ready.go:81] duration metric: took 364.13447ms waiting for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:58.020142   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:58.216335   26786 request.go:629] Waited for 196.131323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:04:58.216401   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:04:58.216413   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:58.216427   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:58.216454   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:58.219589   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:58.219615   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:58.219636   26786 round_trippers.go:580]     Audit-Id: a663f434-294d-42a5-b6ee-93642ba6de27
	I1205 20:04:58.219643   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:58.219649   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:58.219656   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:58.219664   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:58.219672   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:58 GMT
	I1205 20:04:58.219846   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"412","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:04:58.416380   26786 request.go:629] Waited for 196.087355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:58.416436   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:58.416449   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:58.416462   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:58.416476   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:58.419220   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:58.419239   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:58.419255   26786 round_trippers.go:580]     Audit-Id: e762e04d-f746-418b-8531-2bc71a6ac1b8
	I1205 20:04:58.419264   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:58.419273   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:58.419283   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:58.419292   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:58.419306   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:58 GMT
	I1205 20:04:58.419883   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:58.420211   26786 pod_ready.go:92] pod "kube-proxy-mgmt2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:58.420226   26786 pod_ready.go:81] duration metric: took 400.079493ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:58.420235   26786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:58.615615   26786 request.go:629] Waited for 195.327016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:04:58.615700   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:04:58.615708   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:58.615722   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:58.615735   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:58.618512   26786 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:04:58.618536   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:58.618543   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:58 GMT
	I1205 20:04:58.618548   26786 round_trippers.go:580]     Audit-Id: bf3598c7-c214-4e8c-ac80-d5bd0692000b
	I1205 20:04:58.618553   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:58.618558   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:58.618563   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:58.618569   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:58.618721   26786 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"437","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:04:58.816468   26786 request.go:629] Waited for 197.366905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:58.816536   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:04:58.816544   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:58.816554   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:58.816564   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:58.820536   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:58.820560   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:58.820570   26786 round_trippers.go:580]     Audit-Id: 9239e823-7971-404a-95b2-51ae04fcdf97
	I1205 20:04:58.820577   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:58.820588   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:58.820596   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:58.820608   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:58.820618   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:58 GMT
	I1205 20:04:58.820747   26786 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5897 chars]
	I1205 20:04:58.821057   26786 pod_ready.go:92] pod "kube-scheduler-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:04:58.821074   26786 pod_ready.go:81] duration metric: took 400.832769ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:04:58.821087   26786 pod_ready.go:38] duration metric: took 1.201575144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:04:58.821116   26786 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:04:58.821169   26786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:04:58.833871   26786 system_svc.go:56] duration metric: took 12.751383ms WaitForService to wait for kubelet.
	I1205 20:04:58.833894   26786 kubeadm.go:581] duration metric: took 8.247545369s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:04:58.833913   26786 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:04:59.015600   26786 request.go:629] Waited for 181.633783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I1205 20:04:59.015652   26786 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:04:59.015657   26786 round_trippers.go:469] Request Headers:
	I1205 20:04:59.015664   26786 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:04:59.015670   26786 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:04:59.019644   26786 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:04:59.019669   26786 round_trippers.go:577] Response Headers:
	I1205 20:04:59.019678   26786 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:04:59.019685   26786 round_trippers.go:580]     Content-Type: application/json
	I1205 20:04:59.019693   26786 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:04:59.019700   26786 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:04:59.019706   26786 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:04:59 GMT
	I1205 20:04:59.019713   26786 round_trippers.go:580]     Audit-Id: c4a32629-f263-40a4-81a8-5c4aaba3e284
	I1205 20:04:59.020556   26786 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"422","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10075 chars]
	I1205 20:04:59.020985   26786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:04:59.021003   26786 node_conditions.go:123] node cpu capacity is 2
	I1205 20:04:59.021015   26786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:04:59.021021   26786 node_conditions.go:123] node cpu capacity is 2
	I1205 20:04:59.021027   26786 node_conditions.go:105] duration metric: took 187.109057ms to run NodePressure ...
	I1205 20:04:59.021044   26786 start.go:228] waiting for startup goroutines ...
	I1205 20:04:59.021071   26786 start.go:242] writing updated cluster config ...
	I1205 20:04:59.021354   26786 ssh_runner.go:195] Run: rm -f paused
	I1205 20:04:59.066755   26786 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:04:59.069592   26786 out.go:177] * Done! kubectl is now configured to use "multinode-558947" cluster and "default" namespace by default
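The readiness polling recorded above (the pod_ready.go "waiting up to 6m0s for pod ... to be Ready" lines) and the node-capacity pass (the node_conditions.go lines) both go through the same API endpoints shown in the GET requests. The snippet below is a minimal client-go sketch of that pattern, not part of the minikube test harness; the kubeconfig path, pod name, namespace, and the 6-minute budget are assumptions copied from the log lines above.

```go
// Illustrative sketch only: mirrors the "waiting for pod ... to be Ready" and
// node-capacity checks visible in the log above. Pod name, namespace, and the
// 6m timeout are assumptions taken from those log lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the same kubeconfig kubectl/minikube would use (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Poll until kube-proxy-kjph8 reports Ready, up to 6 minutes
	// (the same budget the "waiting up to 6m0s" log lines show).
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-kjph8", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		panic(err)
	}

	// Equivalent of the NodePressure/capacity pass: list nodes and print the
	// ephemeral-storage and CPU capacity reported in their status.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
```

The log above shows the test code issuing the raw GETs against /api/v1/namespaces/kube-system/pods and /api/v1/nodes itself (with client-side throttling between requests); the sketch uses a wait helper instead, so only the overall pattern, not the timings, matches.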
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:03:22 UTC, ends at Tue 2023-12-05 20:05:05 UTC. --
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.841436827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806705841425406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aa018703-7b0e-46c7-be10-542ad6f6b484 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.842030417Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fc2ae04-485d-4c89-a807-ba696bbee373 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.842139801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fc2ae04-485d-4c89-a807-ba696bbee373 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.842411834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317f294c46eba3ef43c9fad0e69038cc51fba7f17c7bae8cd722dad165b52a1d,PodSandboxId:cad379f5ff19b3c99d8f844ac19bdf9b5491b80b5d1dc29dcbc998f4c59c296e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701806701656023823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1dd62c6437428f97e70d5d9e0d815fbc57a207257c97190e999fb29415233a,PodSandboxId:9b5a53749cf2cd943102000f3819b073fd4b4259b7a9564466c502542a4bab76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701806656661939617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a30d7284f90aa9489b4994f49ecf599bd7e260e7a36d0c1a5aebdd569744d8,PodSandboxId:7d8246963d10918631f92ac6bb71e1cea4d5cf644d33d0aa81d899a31382841e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806654888450159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b06a7b92b5ab6dd8328ea51dc86a09c97b277daf318ad44e71a8a30522340,PodSandboxId:650fba29f1d9f58c93301e9cb05117d9dd18b4b667b144aade037769c08ed244,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701806652474687978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c285a46a1433ed81517bbf33e3e08f17066a54a18d3ecb925e5b190cbaa5892e,PodSandboxId:c09c79b6f1508e05a6e1c83f9c66305677a1fc9d9572fb8346f07562568b0db1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701806649941326521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a76b04780fffc7578f56fed9ef527064f997e25807fd640fe5a0a145bd19ac,PodSandboxId:81b1d1a5f1f0157325abf8f1e8973d373b722478e5214883be35486f45972db0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701806629099896997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.
container.hash: 24713faf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e725dcc58dc42661e847c544f5f35c25cae5cb452bf900a732a743ad2a333254,PodSandboxId:1cbe788a7cff571d671699e8f0bc681d54fc1ef6f30e0d964880f06a6b54c355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701806628853587132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260c7faea8fbc7583818af2f0e31c0f0a950041197977ea546f11078c219fbab,PodSandboxId:1a5c614d2196f3b0691f66001eaa15daa0a2626262404d2fbe2e57d745f46a72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701806628809112866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f,PodSandboxId:3d66da5083821ba50e7e7970788de88f17fc2eafae3f45c0b4f6aa76fbe14c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701806628476211740,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.
container.hash: 4d16373e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fc2ae04-485d-4c89-a807-ba696bbee373 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.881893440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b12b5600-f230-40a6-be2a-ac963917e2a7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.881955159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b12b5600-f230-40a6-be2a-ac963917e2a7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.883548950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b204c798-a63b-4d82-b055-c49a73e1cace name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.884004674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806705883987377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b204c798-a63b-4d82-b055-c49a73e1cace name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.884790324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67095a1d-6f35-4d68-8bcc-36df33a701f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.884836607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67095a1d-6f35-4d68-8bcc-36df33a701f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.885059856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317f294c46eba3ef43c9fad0e69038cc51fba7f17c7bae8cd722dad165b52a1d,PodSandboxId:cad379f5ff19b3c99d8f844ac19bdf9b5491b80b5d1dc29dcbc998f4c59c296e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701806701656023823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1dd62c6437428f97e70d5d9e0d815fbc57a207257c97190e999fb29415233a,PodSandboxId:9b5a53749cf2cd943102000f3819b073fd4b4259b7a9564466c502542a4bab76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701806656661939617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a30d7284f90aa9489b4994f49ecf599bd7e260e7a36d0c1a5aebdd569744d8,PodSandboxId:7d8246963d10918631f92ac6bb71e1cea4d5cf644d33d0aa81d899a31382841e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806654888450159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b06a7b92b5ab6dd8328ea51dc86a09c97b277daf318ad44e71a8a30522340,PodSandboxId:650fba29f1d9f58c93301e9cb05117d9dd18b4b667b144aade037769c08ed244,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701806652474687978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c285a46a1433ed81517bbf33e3e08f17066a54a18d3ecb925e5b190cbaa5892e,PodSandboxId:c09c79b6f1508e05a6e1c83f9c66305677a1fc9d9572fb8346f07562568b0db1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701806649941326521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a76b04780fffc7578f56fed9ef527064f997e25807fd640fe5a0a145bd19ac,PodSandboxId:81b1d1a5f1f0157325abf8f1e8973d373b722478e5214883be35486f45972db0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701806629099896997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.
container.hash: 24713faf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e725dcc58dc42661e847c544f5f35c25cae5cb452bf900a732a743ad2a333254,PodSandboxId:1cbe788a7cff571d671699e8f0bc681d54fc1ef6f30e0d964880f06a6b54c355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701806628853587132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260c7faea8fbc7583818af2f0e31c0f0a950041197977ea546f11078c219fbab,PodSandboxId:1a5c614d2196f3b0691f66001eaa15daa0a2626262404d2fbe2e57d745f46a72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701806628809112866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f,PodSandboxId:3d66da5083821ba50e7e7970788de88f17fc2eafae3f45c0b4f6aa76fbe14c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701806628476211740,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.
container.hash: 4d16373e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67095a1d-6f35-4d68-8bcc-36df33a701f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.925409995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=14332ffe-1e06-4e26-acc8-7f00e0153177 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.925465564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=14332ffe-1e06-4e26-acc8-7f00e0153177 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.926505971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a25cb7ee-1f19-4650-95e0-ac516892160c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.927001841Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806705926985645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a25cb7ee-1f19-4650-95e0-ac516892160c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.927487363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d358b2c1-69fd-41a3-91d3-422e97ab68d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.927557209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d358b2c1-69fd-41a3-91d3-422e97ab68d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.927850573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317f294c46eba3ef43c9fad0e69038cc51fba7f17c7bae8cd722dad165b52a1d,PodSandboxId:cad379f5ff19b3c99d8f844ac19bdf9b5491b80b5d1dc29dcbc998f4c59c296e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701806701656023823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1dd62c6437428f97e70d5d9e0d815fbc57a207257c97190e999fb29415233a,PodSandboxId:9b5a53749cf2cd943102000f3819b073fd4b4259b7a9564466c502542a4bab76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701806656661939617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a30d7284f90aa9489b4994f49ecf599bd7e260e7a36d0c1a5aebdd569744d8,PodSandboxId:7d8246963d10918631f92ac6bb71e1cea4d5cf644d33d0aa81d899a31382841e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806654888450159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b06a7b92b5ab6dd8328ea51dc86a09c97b277daf318ad44e71a8a30522340,PodSandboxId:650fba29f1d9f58c93301e9cb05117d9dd18b4b667b144aade037769c08ed244,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701806652474687978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c285a46a1433ed81517bbf33e3e08f17066a54a18d3ecb925e5b190cbaa5892e,PodSandboxId:c09c79b6f1508e05a6e1c83f9c66305677a1fc9d9572fb8346f07562568b0db1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701806649941326521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a76b04780fffc7578f56fed9ef527064f997e25807fd640fe5a0a145bd19ac,PodSandboxId:81b1d1a5f1f0157325abf8f1e8973d373b722478e5214883be35486f45972db0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701806629099896997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.
container.hash: 24713faf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e725dcc58dc42661e847c544f5f35c25cae5cb452bf900a732a743ad2a333254,PodSandboxId:1cbe788a7cff571d671699e8f0bc681d54fc1ef6f30e0d964880f06a6b54c355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701806628853587132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260c7faea8fbc7583818af2f0e31c0f0a950041197977ea546f11078c219fbab,PodSandboxId:1a5c614d2196f3b0691f66001eaa15daa0a2626262404d2fbe2e57d745f46a72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701806628809112866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f,PodSandboxId:3d66da5083821ba50e7e7970788de88f17fc2eafae3f45c0b4f6aa76fbe14c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701806628476211740,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.
container.hash: 4d16373e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d358b2c1-69fd-41a3-91d3-422e97ab68d0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.972400949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e39dd9e5-e667-4faf-bbd1-b269c0aa41a3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.972499226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e39dd9e5-e667-4faf-bbd1-b269c0aa41a3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.975483369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=44ba2948-0ba7-4e02-b920-7d7b608520c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.976003895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701806705975980314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=44ba2948-0ba7-4e02-b920-7d7b608520c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.978701196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cf2e0784-e622-4764-ba0c-8ec70e6d3217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.978890595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cf2e0784-e622-4764-ba0c-8ec70e6d3217 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:05:05 multinode-558947 crio[715]: time="2023-12-05 20:05:05.979310518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317f294c46eba3ef43c9fad0e69038cc51fba7f17c7bae8cd722dad165b52a1d,PodSandboxId:cad379f5ff19b3c99d8f844ac19bdf9b5491b80b5d1dc29dcbc998f4c59c296e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701806701656023823,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1dd62c6437428f97e70d5d9e0d815fbc57a207257c97190e999fb29415233a,PodSandboxId:9b5a53749cf2cd943102000f3819b073fd4b4259b7a9564466c502542a4bab76,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701806656661939617,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a30d7284f90aa9489b4994f49ecf599bd7e260e7a36d0c1a5aebdd569744d8,PodSandboxId:7d8246963d10918631f92ac6bb71e1cea4d5cf644d33d0aa81d899a31382841e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701806654888450159,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a8b06a7b92b5ab6dd8328ea51dc86a09c97b277daf318ad44e71a8a30522340,PodSandboxId:650fba29f1d9f58c93301e9cb05117d9dd18b4b667b144aade037769c08ed244,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701806652474687978,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c285a46a1433ed81517bbf33e3e08f17066a54a18d3ecb925e5b190cbaa5892e,PodSandboxId:c09c79b6f1508e05a6e1c83f9c66305677a1fc9d9572fb8346f07562568b0db1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701806649941326521,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a76b04780fffc7578f56fed9ef527064f997e25807fd640fe5a0a145bd19ac,PodSandboxId:81b1d1a5f1f0157325abf8f1e8973d373b722478e5214883be35486f45972db0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701806629099896997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.
container.hash: 24713faf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e725dcc58dc42661e847c544f5f35c25cae5cb452bf900a732a743ad2a333254,PodSandboxId:1cbe788a7cff571d671699e8f0bc681d54fc1ef6f30e0d964880f06a6b54c355,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701806628853587132,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.ha
sh: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260c7faea8fbc7583818af2f0e31c0f0a950041197977ea546f11078c219fbab,PodSandboxId:1a5c614d2196f3b0691f66001eaa15daa0a2626262404d2fbe2e57d745f46a72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701806628809112866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io
.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f,PodSandboxId:3d66da5083821ba50e7e7970788de88f17fc2eafae3f45c0b4f6aa76fbe14c78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701806628476211740,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.
container.hash: 4d16373e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cf2e0784-e622-4764-ba0c-8ec70e6d3217 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	317f294c46eba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   cad379f5ff19b       busybox-5bc68d56bd-6www8
	4f1dd62c64374       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      49 seconds ago       Running             coredns                   0                   9b5a53749cf2c       coredns-5dd5756b68-knl4d
	88a30d7284f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      51 seconds ago       Running             storage-provisioner       0                   7d8246963d109       storage-provisioner
	6a8b06a7b92b5       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      53 seconds ago       Running             kindnet-cni               0                   650fba29f1d9f       kindnet-cv76g
	c285a46a1433e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      56 seconds ago       Running             kube-proxy                0                   c09c79b6f1508       kube-proxy-mgmt2
	e1a76b04780ff       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   81b1d1a5f1f01       etcd-multinode-558947
	e725dcc58dc42       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   1cbe788a7cff5       kube-scheduler-multinode-558947
	260c7faea8fbc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   1a5c614d2196f       kube-controller-manager-multinode-558947
	73c0850b27ca0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   3d66da5083821       kube-apiserver-multinode-558947
	
	* 
	* ==> coredns [4f1dd62c6437428f97e70d5d9e0d815fbc57a207257c97190e999fb29415233a] <==
	* [INFO] 10.244.0.3:38839 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115881s
	[INFO] 10.244.1.2:60249 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133625s
	[INFO] 10.244.1.2:57624 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001863024s
	[INFO] 10.244.1.2:36345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135943s
	[INFO] 10.244.1.2:44726 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000164195s
	[INFO] 10.244.1.2:59910 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00143237s
	[INFO] 10.244.1.2:41836 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118448s
	[INFO] 10.244.1.2:59418 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094567s
	[INFO] 10.244.1.2:55378 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000200132s
	[INFO] 10.244.0.3:47067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110305s
	[INFO] 10.244.0.3:43246 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012966s
	[INFO] 10.244.0.3:48640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098573s
	[INFO] 10.244.0.3:42583 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075031s
	[INFO] 10.244.1.2:37873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207288s
	[INFO] 10.244.1.2:39221 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114317s
	[INFO] 10.244.1.2:46608 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176323s
	[INFO] 10.244.1.2:50962 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111275s
	[INFO] 10.244.0.3:40497 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141008s
	[INFO] 10.244.0.3:57580 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211493s
	[INFO] 10.244.0.3:45089 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114245s
	[INFO] 10.244.0.3:51552 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097966s
	[INFO] 10.244.1.2:50141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138082s
	[INFO] 10.244.1.2:41522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000302153s
	[INFO] 10.244.1.2:40048 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141916s
	[INFO] 10.244.1.2:41109 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000225118s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-558947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-558947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-558947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_03_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:03:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-558947
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:04:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:04:14 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:04:14 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:04:14 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:04:14 +0000   Tue, 05 Dec 2023 20:04:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-558947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dc05fa07b7a45888dae1cab7c1f644b
	  System UUID:                2dc05fa0-7b7a-4588-8dae-1cab7c1f644b
	  Boot ID:                    505ba4a0-7eca-4abf-91ee-42a6bda5f63b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6www8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-knl4d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 etcd-multinode-558947                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         70s
	  kube-system                 kindnet-cv76g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      57s
	  kube-system                 kube-apiserver-multinode-558947             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-multinode-558947    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-mgmt2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-558947             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node multinode-558947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node multinode-558947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-558947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node multinode-558947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node multinode-558947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node multinode-558947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                node-controller  Node multinode-558947 event: Registered Node multinode-558947 in Controller
	  Normal  NodeReady                52s                kubelet          Node multinode-558947 status is now: NodeReady
	
	
	Name:               multinode-558947-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-558947-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-558947
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_05T20_04_50_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:04:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-558947-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:04:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:04:57 +0000   Tue, 05 Dec 2023 20:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:04:57 +0000   Tue, 05 Dec 2023 20:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:04:57 +0000   Tue, 05 Dec 2023 20:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:04:57 +0000   Tue, 05 Dec 2023 20:04:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-558947-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 77915f8f899f4a91a25258a548ce6f37
	  System UUID:                77915f8f-899f-4a91-a252-58a548ce6f37
	  Boot ID:                    99d55947-ac81-4222-ac3e-9e89168a367c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-phsxm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-xcs7j               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-kjph8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 18s)  kubelet          Node multinode-558947-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 18s)  kubelet          Node multinode-558947-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 18s)  kubelet          Node multinode-558947-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-558947-m02 event: Registered Node multinode-558947-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-558947-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068444] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.393738] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.431263] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152360] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.047263] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.640924] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.100161] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.139563] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.101000] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.227470] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +10.179958] systemd-fstab-generator[922]: Ignoring "noauto" for root device
	[  +8.791841] systemd-fstab-generator[1258]: Ignoring "noauto" for root device
	[Dec 5 20:04] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [e1a76b04780fffc7578f56fed9ef527064f997e25807fd640fe5a0a145bd19ac] <==
	* {"level":"info","ts":"2023-12-05T20:03:50.710692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c switched to configuration voters=(12397538410003441052)"}
	{"level":"info","ts":"2023-12-05T20:03:50.712822Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","added-peer-id":"ac0ce77fb984259c","added-peer-peer-urls":["https://192.168.39.3:2380"]}
	{"level":"info","ts":"2023-12-05T20:03:50.716455Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-05T20:03:50.716657Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2023-12-05T20:03:50.71676Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2023-12-05T20:03:50.717767Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:03:50.717556Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ac0ce77fb984259c","initial-advertise-peer-urls":["https://192.168.39.3:2380"],"listen-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-05T20:03:50.953798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:50.953863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:50.953908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 1"}
	{"level":"info","ts":"2023-12-05T20:03:50.953921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:50.953936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgVoteResp from ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:50.953943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:50.953951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac0ce77fb984259c elected leader ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2023-12-05T20:03:50.957111Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:50.961084Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ac0ce77fb984259c","local-member-attributes":"{Name:multinode-558947 ClientURLs:[https://192.168.39.3:2379]}","request-path":"/0/members/ac0ce77fb984259c/attributes","cluster-id":"1d030e9334923ef1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:03:50.961137Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:03:50.961841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:50.961951Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:50.961972Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:03:50.962368Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:03:50.968827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:03:50.969624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.3:2379"}
	{"level":"info","ts":"2023-12-05T20:03:50.971811Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:03:50.971853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:05:06 up 1 min,  0 users,  load average: 0.52, 0.27, 0.10
	Linux multinode-558947 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [6a8b06a7b92b5ab6dd8328ea51dc86a09c97b277daf318ad44e71a8a30522340] <==
	* I1205 20:04:13.332508       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1205 20:04:13.332676       1 main.go:107] hostIP = 192.168.39.3
	podIP = 192.168.39.3
	I1205 20:04:13.333065       1 main.go:116] setting mtu 1500 for CNI 
	I1205 20:04:13.333119       1 main.go:146] kindnetd IP family: "ipv4"
	I1205 20:04:13.333155       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1205 20:04:13.831030       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:04:13.831085       1 main.go:227] handling current node
	I1205 20:04:23.845036       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:04:23.845090       1 main.go:227] handling current node
	I1205 20:04:33.857445       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:04:33.857506       1 main.go:227] handling current node
	I1205 20:04:43.868393       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:04:43.868450       1 main.go:227] handling current node
	I1205 20:04:53.873577       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:04:53.873668       1 main.go:227] handling current node
	I1205 20:04:53.873694       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:04:53.873811       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	I1205 20:04:53.874004       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.10 Flags: [] Table: 0} 
	I1205 20:05:03.882807       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:05:03.882988       1 main.go:227] handling current node
	I1205 20:05:03.883027       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:05:03.883059       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f] <==
	* I1205 20:03:52.998041       1 controller.go:624] quota admission added evaluator for: namespaces
	I1205 20:03:52.998122       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 20:03:53.000671       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 20:03:53.000828       1 aggregator.go:166] initial CRD sync complete...
	I1205 20:03:53.000835       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 20:03:53.000839       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:03:53.000844       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:03:53.007214       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 20:03:53.007287       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1205 20:03:53.182248       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:03:53.802993       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 20:03:53.810791       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 20:03:53.811474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:03:54.523037       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:03:54.594643       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:03:54.746474       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1205 20:03:54.754789       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.3]
	I1205 20:03:54.755672       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 20:03:54.760888       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:03:54.888418       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 20:03:56.017897       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 20:03:56.039912       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:03:56.051812       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:04:08.883873       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1205 20:04:09.026215       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [260c7faea8fbc7583818af2f0e31c0f0a950041197977ea546f11078c219fbab] <==
	* I1205 20:04:09.342667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.024µs"
	I1205 20:04:14.240342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.24µs"
	I1205 20:04:14.282398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.706µs"
	I1205 20:04:17.364286       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.21µs"
	I1205 20:04:17.398071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.668063ms"
	I1205 20:04:17.398495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.749µs"
	I1205 20:04:18.150829       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1205 20:04:49.672901       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-558947-m02\" does not exist"
	I1205 20:04:49.684946       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-558947-m02" podCIDRs=["10.244.1.0/24"]
	I1205 20:04:49.706873       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xcs7j"
	I1205 20:04:49.709223       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kjph8"
	I1205 20:04:53.156039       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-558947-m02"
	I1205 20:04:53.156450       1 event.go:307] "Event occurred" object="multinode-558947-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-558947-m02 event: Registered Node multinode-558947-m02 in Controller"
	I1205 20:04:57.534788       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:04:59.749962       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1205 20:04:59.775934       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-phsxm"
	I1205 20:04:59.800615       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-6www8"
	I1205 20:04:59.823392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="74.141143ms"
	I1205 20:04:59.838783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.552582ms"
	I1205 20:04:59.870889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.997916ms"
	I1205 20:04:59.871142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.255µs"
	I1205 20:05:02.275425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.087828ms"
	I1205 20:05:02.275532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.963µs"
	I1205 20:05:02.523165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.302391ms"
	I1205 20:05:02.523950       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.516µs"
	
	* 
	* ==> kube-proxy [c285a46a1433ed81517bbf33e3e08f17066a54a18d3ecb925e5b190cbaa5892e] <==
	* I1205 20:04:10.122613       1 server_others.go:69] "Using iptables proxy"
	I1205 20:04:10.138492       1 node.go:141] Successfully retrieved node IP: 192.168.39.3
	I1205 20:04:10.191414       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:04:10.191465       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:04:10.195346       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:04:10.195416       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:04:10.195658       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:04:10.195697       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:04:10.196523       1 config.go:188] "Starting service config controller"
	I1205 20:04:10.196582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:04:10.196613       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:04:10.196617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:04:10.197117       1 config.go:315] "Starting node config controller"
	I1205 20:04:10.197155       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:04:10.296787       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:04:10.296794       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:04:10.297257       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e725dcc58dc42661e847c544f5f35c25cae5cb452bf900a732a743ad2a333254] <==
	* W1205 20:03:53.762415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:03:53.762486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:53.797572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:03:53.797673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 20:03:53.863072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:03:53.863164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:03:53.873032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:03:53.873117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 20:03:53.883554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:03:53.883605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 20:03:53.969902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:03:53.969959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 20:03:54.028950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:03:54.029005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 20:03:54.092557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:03:54.092644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 20:03:54.203638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:03:54.203809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:54.253039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:03:54.253129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 20:03:54.287853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:03:54.287908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 20:03:54.344191       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:03:54.344462       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 20:03:56.334505       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:03:22 UTC, ends at Tue 2023-12-05 20:05:06 UTC. --
	Dec 05 20:04:09 multinode-558947 kubelet[1265]: I1205 20:04:09.168108    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41275cfd-cb0f-4886-b1bc-a86b7e20cc14-lib-modules\") pod \"kube-proxy-mgmt2\" (UID: \"41275cfd-cb0f-4886-b1bc-a86b7e20cc14\") " pod="kube-system/kube-proxy-mgmt2"
	Dec 05 20:04:09 multinode-558947 kubelet[1265]: I1205 20:04:09.168126    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41275cfd-cb0f-4886-b1bc-a86b7e20cc14-xtables-lock\") pod \"kube-proxy-mgmt2\" (UID: \"41275cfd-cb0f-4886-b1bc-a86b7e20cc14\") " pod="kube-system/kube-proxy-mgmt2"
	Dec 05 20:04:09 multinode-558947 kubelet[1265]: I1205 20:04:09.168209    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjcj\" (UniqueName: \"kubernetes.io/projected/41275cfd-cb0f-4886-b1bc-a86b7e20cc14-kube-api-access-4jjcj\") pod \"kube-proxy-mgmt2\" (UID: \"41275cfd-cb0f-4886-b1bc-a86b7e20cc14\") " pod="kube-system/kube-proxy-mgmt2"
	Dec 05 20:04:13 multinode-558947 kubelet[1265]: I1205 20:04:13.342371    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cv76g" podStartSLOduration=4.342335025 podCreationTimestamp="2023-12-05 20:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:13.341880433 +0000 UTC m=+17.345764154" watchObservedRunningTime="2023-12-05 20:04:13.342335025 +0000 UTC m=+17.346218728"
	Dec 05 20:04:13 multinode-558947 kubelet[1265]: I1205 20:04:13.342625    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mgmt2" podStartSLOduration=4.342605469 podCreationTimestamp="2023-12-05 20:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:10.336676432 +0000 UTC m=+14.340560153" watchObservedRunningTime="2023-12-05 20:04:13.342605469 +0000 UTC m=+17.346489188"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.184615    1265 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.226340    1265 topology_manager.go:215] "Topology Admit Handler" podUID="58d4c242-7ea5-49f5-999c-3c9135144038" podNamespace="kube-system" podName="storage-provisioner"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.235075    1265 topology_manager.go:215] "Topology Admit Handler" podUID="28d6c367-593c-469a-90c6-b3c13cedc3df" podNamespace="kube-system" podName="coredns-5dd5756b68-knl4d"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: W1205 20:04:14.242392    1265 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-558947" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-558947' and this object
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: E1205 20:04:14.242450    1265 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-558947" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-558947' and this object
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.308271    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8j95\" (UniqueName: \"kubernetes.io/projected/58d4c242-7ea5-49f5-999c-3c9135144038-kube-api-access-v8j95\") pod \"storage-provisioner\" (UID: \"58d4c242-7ea5-49f5-999c-3c9135144038\") " pod="kube-system/storage-provisioner"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.308350    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d6c367-593c-469a-90c6-b3c13cedc3df-config-volume\") pod \"coredns-5dd5756b68-knl4d\" (UID: \"28d6c367-593c-469a-90c6-b3c13cedc3df\") " pod="kube-system/coredns-5dd5756b68-knl4d"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.308372    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8lh\" (UniqueName: \"kubernetes.io/projected/28d6c367-593c-469a-90c6-b3c13cedc3df-kube-api-access-rh8lh\") pod \"coredns-5dd5756b68-knl4d\" (UID: \"28d6c367-593c-469a-90c6-b3c13cedc3df\") " pod="kube-system/coredns-5dd5756b68-knl4d"
	Dec 05 20:04:14 multinode-558947 kubelet[1265]: I1205 20:04:14.308392    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58d4c242-7ea5-49f5-999c-3c9135144038-tmp\") pod \"storage-provisioner\" (UID: \"58d4c242-7ea5-49f5-999c-3c9135144038\") " pod="kube-system/storage-provisioner"
	Dec 05 20:04:15 multinode-558947 kubelet[1265]: E1205 20:04:15.410685    1265 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 05 20:04:15 multinode-558947 kubelet[1265]: E1205 20:04:15.410983    1265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28d6c367-593c-469a-90c6-b3c13cedc3df-config-volume podName:28d6c367-593c-469a-90c6-b3c13cedc3df nodeName:}" failed. No retries permitted until 2023-12-05 20:04:15.910902153 +0000 UTC m=+19.914785866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/28d6c367-593c-469a-90c6-b3c13cedc3df-config-volume") pod "coredns-5dd5756b68-knl4d" (UID: "28d6c367-593c-469a-90c6-b3c13cedc3df") : failed to sync configmap cache: timed out waiting for the condition
	Dec 05 20:04:16 multinode-558947 kubelet[1265]: I1205 20:04:16.282522    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.282474211 podCreationTimestamp="2023-12-05 20:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:15.352787224 +0000 UTC m=+19.356670930" watchObservedRunningTime="2023-12-05 20:04:16.282474211 +0000 UTC m=+20.286357927"
	Dec 05 20:04:17 multinode-558947 kubelet[1265]: I1205 20:04:17.381670    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-knl4d" podStartSLOduration=8.381631499 podCreationTimestamp="2023-12-05 20:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-05 20:04:17.362563707 +0000 UTC m=+21.366447428" watchObservedRunningTime="2023-12-05 20:04:17.381631499 +0000 UTC m=+21.385515217"
	Dec 05 20:04:56 multinode-558947 kubelet[1265]: E1205 20:04:56.323552    1265 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 20:04:56 multinode-558947 kubelet[1265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:04:56 multinode-558947 kubelet[1265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:04:56 multinode-558947 kubelet[1265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:04:59 multinode-558947 kubelet[1265]: I1205 20:04:59.813808    1265 topology_manager.go:215] "Topology Admit Handler" podUID="448efe43-2e13-4b86-9c87-090ece8e686e" podNamespace="default" podName="busybox-5bc68d56bd-6www8"
	Dec 05 20:04:59 multinode-558947 kubelet[1265]: I1205 20:04:59.870544    1265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf29r\" (UniqueName: \"kubernetes.io/projected/448efe43-2e13-4b86-9c87-090ece8e686e-kube-api-access-zf29r\") pod \"busybox-5bc68d56bd-6www8\" (UID: \"448efe43-2e13-4b86-9c87-090ece8e686e\") " pod="default/busybox-5bc68d56bd-6www8"
	Dec 05 20:05:02 multinode-558947 kubelet[1265]: I1205 20:05:02.514194    1265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-6www8" podStartSLOduration=2.601576992 podCreationTimestamp="2023-12-05 20:04:59 +0000 UTC" firstStartedPulling="2023-12-05 20:05:00.714306328 +0000 UTC m=+64.718190029" lastFinishedPulling="2023-12-05 20:05:01.626864409 +0000 UTC m=+65.630748114" observedRunningTime="2023-12-05 20:05:02.513353963 +0000 UTC m=+66.517237684" watchObservedRunningTime="2023-12-05 20:05:02.514135077 +0000 UTC m=+66.518018798"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-558947 -n multinode-558947
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-558947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.25s)
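The exact steps the test executes are not shown in these post-mortem logs; as a rough manual reproduction sketch (commands assumed, not taken from the test source), one could exec into each of the two busybox pods listed above, resolve host.minikube.internal, and ping the returned address. The address 192.168.39.1 below is inferred from the coredns PTR lookups in the logs and may differ on other runs.

# Assumed manual check from each busybox pod: resolve the host entry, then ping the resolved host IP.
kubectl --context multinode-558947 exec busybox-5bc68d56bd-6www8 -- nslookup host.minikube.internal
kubectl --context multinode-558947 exec busybox-5bc68d56bd-6www8 -- ping -c 1 192.168.39.1
kubectl --context multinode-558947 exec busybox-5bc68d56bd-phsxm -- nslookup host.minikube.internal
kubectl --context multinode-558947 exec busybox-5bc68d56bd-phsxm -- ping -c 1 192.168.39.1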

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (691.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558947
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-558947
E1205 20:07:37.060187   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:07:46.654409   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-558947: exit status 82 (2m1.209845036s)

-- stdout --
	* Stopping node "multinode-558947"  ...
	* Stopping node "multinode-558947"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-558947" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558947 --wait=true -v=8 --alsologtostderr
E1205 20:09:00.105542   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:10:16.959401   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:12:37.060310   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:12:46.652288   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:14:09.699538   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:15:16.959621   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:16:40.008858   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:17:37.060041   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:17:46.654602   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558947 --wait=true -v=8 --alsologtostderr: (9m26.972866417s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558947
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-558947 -n multinode-558947
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-558947 logs -n 25: (1.55286846s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile280168625/001/cp-test_multinode-558947-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947:/home/docker/cp-test_multinode-558947-m02_multinode-558947.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n multinode-558947 sudo cat                                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /home/docker/cp-test_multinode-558947-m02_multinode-558947.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03:/home/docker/cp-test_multinode-558947-m02_multinode-558947-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n multinode-558947-m03 sudo cat                                   | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /home/docker/cp-test_multinode-558947-m02_multinode-558947-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp testdata/cp-test.txt                                                | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile280168625/001/cp-test_multinode-558947-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947:/home/docker/cp-test_multinode-558947-m03_multinode-558947.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n multinode-558947 sudo cat                                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /home/docker/cp-test_multinode-558947-m03_multinode-558947.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt                       | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m02:/home/docker/cp-test_multinode-558947-m03_multinode-558947-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n                                                                 | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | multinode-558947-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-558947 ssh -n multinode-558947-m02 sudo cat                                   | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:05 UTC |
	|         | /home/docker/cp-test_multinode-558947-m03_multinode-558947-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-558947 node stop m03                                                          | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:05 UTC | 05 Dec 23 20:06 UTC |
	| node    | multinode-558947 node start                                                             | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:06 UTC | 05 Dec 23 20:06 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-558947                                                                | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:06 UTC |                     |
	| stop    | -p multinode-558947                                                                     | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:06 UTC |                     |
	| start   | -p multinode-558947                                                                     | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:08 UTC | 05 Dec 23 20:17 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-558947                                                                | multinode-558947 | jenkins | v1.32.0 | 05 Dec 23 20:17 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:08:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:08:31.585772   30150 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:08:31.585891   30150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:08:31.585895   30150 out.go:309] Setting ErrFile to fd 2...
	I1205 20:08:31.585900   30150 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:08:31.586085   30150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:08:31.586650   30150 out.go:303] Setting JSON to false
	I1205 20:08:31.587503   30150 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3065,"bootTime":1701803847,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:08:31.587559   30150 start.go:138] virtualization: kvm guest
	I1205 20:08:31.590210   30150 out.go:177] * [multinode-558947] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:08:31.591775   30150 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:08:31.591842   30150 notify.go:220] Checking for updates...
	I1205 20:08:31.594999   30150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:08:31.596676   30150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:08:31.598007   30150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:08:31.599452   30150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:08:31.600826   30150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:08:31.602766   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:08:31.602846   30150 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:08:31.603274   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:08:31.603328   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:08:31.618238   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I1205 20:08:31.618629   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:08:31.619223   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:08:31.619245   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:08:31.619582   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:08:31.619821   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:08:31.655004   30150 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:08:31.656407   30150 start.go:298] selected driver: kvm2
	I1205 20:08:31.656421   30150 start.go:902] validating driver "kvm2" against &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:08:31.656567   30150 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:08:31.656911   30150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:08:31.656989   30150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:08:31.671338   30150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:08:31.671964   30150 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:08:31.672025   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:08:31.672036   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:08:31.672043   30150 start_flags.go:323] config:
	{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provi
sioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:08:31.672660   30150 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:08:31.674695   30150 out.go:177] * Starting control plane node multinode-558947 in cluster multinode-558947
	I1205 20:08:31.675922   30150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:08:31.675961   30150 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:08:31.675969   30150 cache.go:56] Caching tarball of preloaded images
	I1205 20:08:31.676037   30150 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:08:31.676046   30150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:08:31.676163   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:08:31.676339   30150 start.go:365] acquiring machines lock for multinode-558947: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:08:31.676377   30150 start.go:369] acquired machines lock for "multinode-558947" in 21.395µs
	I1205 20:08:31.676387   30150 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:08:31.676392   30150 fix.go:54] fixHost starting: 
	I1205 20:08:31.676645   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:08:31.676668   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:08:31.690238   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
	I1205 20:08:31.690685   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:08:31.691124   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:08:31.691151   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:08:31.691445   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:08:31.691700   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:08:31.691845   30150 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:08:31.693505   30150 fix.go:102] recreateIfNeeded on multinode-558947: state=Running err=<nil>
	W1205 20:08:31.693545   30150 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:08:31.696704   30150 out.go:177] * Updating the running kvm2 "multinode-558947" VM ...
	I1205 20:08:31.698250   30150 machine.go:88] provisioning docker machine ...
	I1205 20:08:31.698291   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:08:31.698549   30150 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:08:31.698687   30150 buildroot.go:166] provisioning hostname "multinode-558947"
	I1205 20:08:31.698707   30150 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:08:31.698852   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:08:31.701102   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:08:31.701596   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:08:31.701627   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:08:31.701825   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:08:31.701976   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:08:31.702095   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:08:31.702194   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:08:31.702337   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:08:31.702722   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:08:31.702737   30150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947 && echo "multinode-558947" | sudo tee /etc/hostname
	I1205 20:08:50.258585   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:08:56.338670   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:08:59.410592   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:05.490611   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:08.562514   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:14.642523   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:17.714616   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:23.794608   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:26.866569   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:32.946588   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:36.018604   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:42.098497   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:45.170643   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:51.250582   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:09:54.322517   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:00.402572   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:03.474645   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:09.554577   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:12.626510   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:18.706533   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:21.778631   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:27.858507   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:30.930551   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:37.010525   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:40.082611   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:46.162549   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:49.234595   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:55.314562   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:10:58.386513   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:04.466604   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:07.538555   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:13.618564   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:16.690667   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:22.770569   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:25.842495   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:31.922511   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:34.994526   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:41.074547   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:44.146626   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:50.226624   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:53.298570   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:11:59.378538   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:02.450508   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:08.530585   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:11.602594   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:17.682576   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:20.754560   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:26.834554   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:29.906558   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:35.986569   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:39.058552   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:45.138538   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:48.210647   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:54.290562   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:12:57.362558   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:03.442582   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:06.514571   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:12.594559   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:15.666513   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:21.746504   30150 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.3:22: connect: no route to host
	I1205 20:13:24.748648   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:13:24.748695   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:24.750683   30150 machine.go:91] provisioned docker machine in 4m53.052414518s
	I1205 20:13:24.750729   30150 fix.go:56] fixHost completed within 4m53.074336307s
	I1205 20:13:24.750739   30150 start.go:83] releasing machines lock for "multinode-558947", held for 4m53.074355126s
	W1205 20:13:24.750761   30150 start.go:694] error starting host: provision: host is not running
	W1205 20:13:24.750849   30150 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:13:24.750861   30150 start.go:709] Will try again in 5 seconds ...
	I1205 20:13:29.753832   30150 start.go:365] acquiring machines lock for multinode-558947: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:13:29.753948   30150 start.go:369] acquired machines lock for "multinode-558947" in 74.499µs
	I1205 20:13:29.753974   30150 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:13:29.753989   30150 fix.go:54] fixHost starting: 
	I1205 20:13:29.754315   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:13:29.754343   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:13:29.768664   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I1205 20:13:29.769088   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:13:29.769675   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:13:29.769710   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:13:29.770023   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:13:29.770235   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:29.770451   30150 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:13:29.771976   30150 fix.go:102] recreateIfNeeded on multinode-558947: state=Stopped err=<nil>
	I1205 20:13:29.772001   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	W1205 20:13:29.772218   30150 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:13:29.774149   30150 out.go:177] * Restarting existing kvm2 VM for "multinode-558947" ...
	I1205 20:13:29.775547   30150 main.go:141] libmachine: (multinode-558947) Calling .Start
	I1205 20:13:29.775706   30150 main.go:141] libmachine: (multinode-558947) Ensuring networks are active...
	I1205 20:13:29.776366   30150 main.go:141] libmachine: (multinode-558947) Ensuring network default is active
	I1205 20:13:29.776641   30150 main.go:141] libmachine: (multinode-558947) Ensuring network mk-multinode-558947 is active
	I1205 20:13:29.776897   30150 main.go:141] libmachine: (multinode-558947) Getting domain xml...
	I1205 20:13:29.777593   30150 main.go:141] libmachine: (multinode-558947) Creating domain...
	I1205 20:13:31.007660   30150 main.go:141] libmachine: (multinode-558947) Waiting to get IP...
	I1205 20:13:31.008439   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:31.008741   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:31.008808   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:31.008722   30953 retry.go:31] will retry after 223.957823ms: waiting for machine to come up
	I1205 20:13:31.234158   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:31.234622   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:31.234645   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:31.234588   30953 retry.go:31] will retry after 318.549106ms: waiting for machine to come up
	I1205 20:13:31.555103   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:31.555523   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:31.555566   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:31.555505   30953 retry.go:31] will retry after 299.866597ms: waiting for machine to come up
	I1205 20:13:31.857123   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:31.857594   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:31.857616   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:31.857543   30953 retry.go:31] will retry after 495.744526ms: waiting for machine to come up
	I1205 20:13:32.355204   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:32.355657   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:32.355700   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:32.355616   30953 retry.go:31] will retry after 654.558259ms: waiting for machine to come up
	I1205 20:13:33.011501   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:33.011873   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:33.011900   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:33.011829   30953 retry.go:31] will retry after 639.875104ms: waiting for machine to come up
	I1205 20:13:33.653700   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:33.654045   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:33.654071   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:33.654000   30953 retry.go:31] will retry after 938.917673ms: waiting for machine to come up
	I1205 20:13:34.593939   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:34.594341   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:34.594363   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:34.594300   30953 retry.go:31] will retry after 1.376927047s: waiting for machine to come up
	I1205 20:13:35.973240   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:35.973727   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:35.973755   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:35.973688   30953 retry.go:31] will retry after 1.58635124s: waiting for machine to come up
	I1205 20:13:37.562363   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:37.562879   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:37.562905   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:37.562844   30953 retry.go:31] will retry after 2.293714318s: waiting for machine to come up
	I1205 20:13:39.857861   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:39.858460   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:39.858510   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:39.858421   30953 retry.go:31] will retry after 2.010929967s: waiting for machine to come up
	I1205 20:13:41.871329   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:41.871719   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:41.871735   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:41.871687   30953 retry.go:31] will retry after 3.261475281s: waiting for machine to come up
	I1205 20:13:45.134533   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:45.134926   30150 main.go:141] libmachine: (multinode-558947) DBG | unable to find current IP address of domain multinode-558947 in network mk-multinode-558947
	I1205 20:13:45.134963   30150 main.go:141] libmachine: (multinode-558947) DBG | I1205 20:13:45.134898   30953 retry.go:31] will retry after 2.756662316s: waiting for machine to come up
	I1205 20:13:47.894980   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.895424   30150 main.go:141] libmachine: (multinode-558947) Found IP for machine: 192.168.39.3
	I1205 20:13:47.895449   30150 main.go:141] libmachine: (multinode-558947) Reserving static IP address...
	I1205 20:13:47.895469   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has current primary IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.895841   30150 main.go:141] libmachine: (multinode-558947) Reserved static IP address: 192.168.39.3
	I1205 20:13:47.895865   30150 main.go:141] libmachine: (multinode-558947) Waiting for SSH to be available...
	I1205 20:13:47.895886   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "multinode-558947", mac: "52:54:00:ca:d0:61", ip: "192.168.39.3"} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:47.895925   30150 main.go:141] libmachine: (multinode-558947) DBG | skip adding static IP to network mk-multinode-558947 - found existing host DHCP lease matching {name: "multinode-558947", mac: "52:54:00:ca:d0:61", ip: "192.168.39.3"}
	I1205 20:13:47.895941   30150 main.go:141] libmachine: (multinode-558947) DBG | Getting to WaitForSSH function...
	I1205 20:13:47.897908   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.898193   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:47.898234   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.898340   30150 main.go:141] libmachine: (multinode-558947) DBG | Using SSH client type: external
	I1205 20:13:47.898363   30150 main.go:141] libmachine: (multinode-558947) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa (-rw-------)
	I1205 20:13:47.898385   30150 main.go:141] libmachine: (multinode-558947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:13:47.898401   30150 main.go:141] libmachine: (multinode-558947) DBG | About to run SSH command:
	I1205 20:13:47.898416   30150 main.go:141] libmachine: (multinode-558947) DBG | exit 0
	I1205 20:13:47.990092   30150 main.go:141] libmachine: (multinode-558947) DBG | SSH cmd err, output: <nil>: 
	I1205 20:13:47.990463   30150 main.go:141] libmachine: (multinode-558947) Calling .GetConfigRaw
	I1205 20:13:47.991116   30150 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:13:47.993665   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.994068   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:47.994099   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.994419   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:13:47.994655   30150 machine.go:88] provisioning docker machine ...
	I1205 20:13:47.994679   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:47.994871   30150 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:13:47.995055   30150 buildroot.go:166] provisioning hostname "multinode-558947"
	I1205 20:13:47.995088   30150 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:13:47.995202   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:47.997583   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.997932   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:47.997966   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:47.998055   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:47.998224   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:47.998383   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:47.998525   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:47.998676   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:13:47.998999   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:13:47.999014   30150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947 && echo "multinode-558947" | sudo tee /etc/hostname
	I1205 20:13:48.135729   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-558947
	
	I1205 20:13:48.135758   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:48.138422   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.138849   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.138895   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.139109   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:48.139342   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.139501   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.139663   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:48.139825   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:13:48.140126   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:13:48.140144   30150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-558947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-558947/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-558947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:13:48.277134   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:13:48.277165   30150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:13:48.277188   30150 buildroot.go:174] setting up certificates
	I1205 20:13:48.277198   30150 provision.go:83] configureAuth start
	I1205 20:13:48.277207   30150 main.go:141] libmachine: (multinode-558947) Calling .GetMachineName
	I1205 20:13:48.277478   30150 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:13:48.279931   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.280261   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.280294   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.280422   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:48.282671   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.282997   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.283020   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.283150   30150 provision.go:138] copyHostCerts
	I1205 20:13:48.283177   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:13:48.283215   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:13:48.283254   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:13:48.283338   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:13:48.283435   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:13:48.283461   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:13:48.283472   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:13:48.283508   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:13:48.283570   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:13:48.283593   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:13:48.283602   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:13:48.283635   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:13:48.283740   30150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.multinode-558947 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-558947]
	I1205 20:13:48.460884   30150 provision.go:172] copyRemoteCerts
	I1205 20:13:48.460944   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:13:48.460974   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:48.463595   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.463941   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.463974   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.464120   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:48.464287   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.464420   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:48.464539   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:13:48.555352   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:13:48.555429   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:13:48.579142   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:13:48.579209   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 20:13:48.601950   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:13:48.602027   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:13:48.624112   30150 provision.go:86] duration metric: configureAuth took 346.900955ms
	I1205 20:13:48.624144   30150 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:13:48.624368   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:13:48.624430   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:48.627057   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.627398   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.627432   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.627587   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:48.627761   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.627921   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.628098   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:48.628254   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:13:48.628680   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:13:48.628706   30150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:13:48.965194   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:13:48.965229   30150 machine.go:91] provisioned docker machine in 970.555333ms
	I1205 20:13:48.965239   30150 start.go:300] post-start starting for "multinode-558947" (driver="kvm2")
	I1205 20:13:48.965251   30150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:13:48.965271   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:48.965598   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:13:48.965631   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:48.968219   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.968573   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:48.968610   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:48.968720   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:48.968899   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:48.969052   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:48.969182   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:13:49.064494   30150 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:13:49.068667   30150 command_runner.go:130] > NAME=Buildroot
	I1205 20:13:49.068690   30150 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1205 20:13:49.068694   30150 command_runner.go:130] > ID=buildroot
	I1205 20:13:49.068700   30150 command_runner.go:130] > VERSION_ID=2021.02.12
	I1205 20:13:49.068705   30150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1205 20:13:49.068766   30150 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:13:49.068795   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:13:49.068866   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:13:49.068937   30150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:13:49.068947   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 20:13:49.069023   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:13:49.077878   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:13:49.101246   30150 start.go:303] post-start completed in 135.994226ms
	I1205 20:13:49.101271   30150 fix.go:56] fixHost completed within 19.347286593s
	I1205 20:13:49.101291   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:49.103809   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.104125   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:49.104155   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.104283   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:49.104475   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:49.104663   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:49.104840   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:49.105028   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:13:49.105394   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I1205 20:13:49.105407   30150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:13:49.231174   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701807229.180063862
	
	I1205 20:13:49.231195   30150 fix.go:206] guest clock: 1701807229.180063862
	I1205 20:13:49.231204   30150 fix.go:219] Guest: 2023-12-05 20:13:49.180063862 +0000 UTC Remote: 2023-12-05 20:13:49.101275086 +0000 UTC m=+317.563833887 (delta=78.788776ms)
	I1205 20:13:49.231228   30150 fix.go:190] guest clock delta is within tolerance: 78.788776ms
	I1205 20:13:49.231234   30150 start.go:83] releasing machines lock for "multinode-558947", held for 19.477276286s
	I1205 20:13:49.231259   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:49.231488   30150 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:13:49.233844   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.234161   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:49.234190   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.234345   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:49.234804   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:49.234959   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:13:49.235044   30150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:13:49.235081   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:49.235141   30150 ssh_runner.go:195] Run: cat /version.json
	I1205 20:13:49.235170   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:13:49.237568   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.237876   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.237910   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:49.237934   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.238041   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:49.238199   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:49.238260   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:49.238294   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:49.238400   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:49.238493   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:13:49.238578   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:13:49.238652   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:13:49.238846   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:13:49.238977   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:13:49.358741   30150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:13:49.359504   30150 command_runner.go:130] > {"iso_version": "v1.32.1-1701387192-17703", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "196015715c4eb12e436d5bb69e555ba604cda88e"}
	I1205 20:13:49.359685   30150 ssh_runner.go:195] Run: systemctl --version
	I1205 20:13:49.365271   30150 command_runner.go:130] > systemd 247 (247)
	I1205 20:13:49.365301   30150 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1205 20:13:49.365359   30150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:13:49.508389   30150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:13:49.514063   30150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:13:49.514164   30150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:13:49.514235   30150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:13:49.529141   30150 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1205 20:13:49.529357   30150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:13:49.529379   30150 start.go:475] detecting cgroup driver to use...
	I1205 20:13:49.529451   30150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:13:49.545457   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:13:49.557151   30150 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:13:49.557197   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:13:49.569083   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:13:49.581250   30150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:13:49.681687   30150 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1205 20:13:49.681882   30150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:13:49.696497   30150 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1205 20:13:49.800996   30150 docker.go:219] disabling docker service ...
	I1205 20:13:49.801059   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:13:49.814720   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:13:49.826814   30150 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1205 20:13:49.826908   30150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:13:49.841211   30150 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1205 20:13:49.935252   30150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:13:50.042854   30150 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1205 20:13:50.042883   30150 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1205 20:13:50.042951   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:13:50.056647   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:13:50.073275   30150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:13:50.073499   30150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:13:50.073563   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:13:50.083791   30150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:13:50.083885   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:13:50.093897   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:13:50.103952   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:13:50.114102   30150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:13:50.124935   30150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:13:50.134127   30150 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:13:50.134173   30150 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:13:50.134227   30150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:13:50.148265   30150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:13:50.157726   30150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:13:50.263409   30150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:13:50.428790   30150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:13:50.428863   30150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:13:50.434263   30150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:13:50.434304   30150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:13:50.434320   30150 command_runner.go:130] > Device: 16h/22d	Inode: 730         Links: 1
	I1205 20:13:50.434328   30150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:13:50.434333   30150 command_runner.go:130] > Access: 2023-12-05 20:13:50.364892622 +0000
	I1205 20:13:50.434339   30150 command_runner.go:130] > Modify: 2023-12-05 20:13:50.364892622 +0000
	I1205 20:13:50.434344   30150 command_runner.go:130] > Change: 2023-12-05 20:13:50.364892622 +0000
	I1205 20:13:50.434348   30150 command_runner.go:130] >  Birth: -
	I1205 20:13:50.434477   30150 start.go:543] Will wait 60s for crictl version
	I1205 20:13:50.434531   30150 ssh_runner.go:195] Run: which crictl
	I1205 20:13:50.437897   30150 command_runner.go:130] > /usr/bin/crictl
	I1205 20:13:50.438155   30150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:13:50.474413   30150 command_runner.go:130] > Version:  0.1.0
	I1205 20:13:50.474440   30150 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:13:50.474447   30150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1205 20:13:50.474456   30150 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:13:50.474475   30150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:13:50.474545   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:13:50.517536   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:13:50.517564   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:13:50.517575   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:13:50.517594   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:13:50.517604   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:13:50.517611   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:13:50.517618   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:13:50.517625   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:13:50.517638   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:13:50.517649   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:13:50.517666   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:13:50.517675   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:13:50.519027   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:13:50.568990   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:13:50.569018   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:13:50.569026   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:13:50.569030   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:13:50.569036   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:13:50.569041   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:13:50.569045   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:13:50.569049   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:13:50.569054   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:13:50.569064   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:13:50.569073   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:13:50.569085   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:13:50.573575   30150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:13:50.574851   30150 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:13:50.577570   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:50.577979   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:13:50.578012   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:13:50.578209   30150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:13:50.582446   30150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:13:50.594405   30150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:13:50.594476   30150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:13:50.632292   30150 command_runner.go:130] > {
	I1205 20:13:50.632308   30150 command_runner.go:130] >   "images": [
	I1205 20:13:50.632313   30150 command_runner.go:130] >     {
	I1205 20:13:50.632320   30150 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1205 20:13:50.632325   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:50.632333   30150 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 20:13:50.632338   30150 command_runner.go:130] >       ],
	I1205 20:13:50.632345   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:50.632362   30150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1205 20:13:50.632373   30150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1205 20:13:50.632382   30150 command_runner.go:130] >       ],
	I1205 20:13:50.632398   30150 command_runner.go:130] >       "size": "750414",
	I1205 20:13:50.632405   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:50.632409   30150 command_runner.go:130] >         "value": "65535"
	I1205 20:13:50.632414   30150 command_runner.go:130] >       },
	I1205 20:13:50.632418   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:50.632426   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:50.632430   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:50.632434   30150 command_runner.go:130] >     }
	I1205 20:13:50.632440   30150 command_runner.go:130] >   ]
	I1205 20:13:50.632448   30150 command_runner.go:130] > }
	I1205 20:13:50.632547   30150 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:13:50.632611   30150 ssh_runner.go:195] Run: which lz4
	I1205 20:13:50.636459   30150 command_runner.go:130] > /usr/bin/lz4
	I1205 20:13:50.636494   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:13:50.636571   30150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:13:50.641038   30150 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:13:50.641109   30150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:13:50.641141   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:13:52.446971   30150 crio.go:444] Took 1.810428 seconds to copy over tarball
	I1205 20:13:52.447035   30150 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:13:55.300046   30150 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.852982214s)
	I1205 20:13:55.300084   30150 crio.go:451] Took 2.853088 seconds to extract the tarball
	I1205 20:13:55.300095   30150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:13:55.341554   30150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:13:55.392423   30150 command_runner.go:130] > {
	I1205 20:13:55.392447   30150 command_runner.go:130] >   "images": [
	I1205 20:13:55.392454   30150 command_runner.go:130] >     {
	I1205 20:13:55.392466   30150 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1205 20:13:55.392472   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.392485   30150 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1205 20:13:55.392491   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392497   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.392512   30150 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1205 20:13:55.392527   30150 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1205 20:13:55.392535   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392543   30150 command_runner.go:130] >       "size": "65258016",
	I1205 20:13:55.392551   30150 command_runner.go:130] >       "uid": null,
	I1205 20:13:55.392555   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.392566   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.392572   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.392577   30150 command_runner.go:130] >     },
	I1205 20:13:55.392586   30150 command_runner.go:130] >     {
	I1205 20:13:55.392610   30150 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 20:13:55.392621   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.392631   30150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 20:13:55.392637   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392642   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.392656   30150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 20:13:55.392667   30150 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 20:13:55.392674   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392683   30150 command_runner.go:130] >       "size": "31470524",
	I1205 20:13:55.392693   30150 command_runner.go:130] >       "uid": null,
	I1205 20:13:55.392704   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.392714   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.392723   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.392746   30150 command_runner.go:130] >     },
	I1205 20:13:55.392760   30150 command_runner.go:130] >     {
	I1205 20:13:55.392769   30150 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1205 20:13:55.392780   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.392793   30150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1205 20:13:55.392802   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392812   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.392827   30150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1205 20:13:55.392839   30150 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1205 20:13:55.392845   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392852   30150 command_runner.go:130] >       "size": "53621675",
	I1205 20:13:55.392862   30150 command_runner.go:130] >       "uid": null,
	I1205 20:13:55.392873   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.392883   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.392893   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.392902   30150 command_runner.go:130] >     },
	I1205 20:13:55.392911   30150 command_runner.go:130] >     {
	I1205 20:13:55.392924   30150 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1205 20:13:55.392932   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.392937   30150 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1205 20:13:55.392946   30150 command_runner.go:130] >       ],
	I1205 20:13:55.392957   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.392972   30150 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1205 20:13:55.392989   30150 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1205 20:13:55.393006   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393018   30150 command_runner.go:130] >       "size": "295456551",
	I1205 20:13:55.393027   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:55.393037   30150 command_runner.go:130] >         "value": "0"
	I1205 20:13:55.393049   30150 command_runner.go:130] >       },
	I1205 20:13:55.393059   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393068   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393077   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393087   30150 command_runner.go:130] >     },
	I1205 20:13:55.393095   30150 command_runner.go:130] >     {
	I1205 20:13:55.393106   30150 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1205 20:13:55.393116   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.393126   30150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1205 20:13:55.393136   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393146   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.393161   30150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1205 20:13:55.393176   30150 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1205 20:13:55.393185   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393192   30150 command_runner.go:130] >       "size": "127226832",
	I1205 20:13:55.393196   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:55.393205   30150 command_runner.go:130] >         "value": "0"
	I1205 20:13:55.393209   30150 command_runner.go:130] >       },
	I1205 20:13:55.393216   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393223   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393227   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393233   30150 command_runner.go:130] >     },
	I1205 20:13:55.393237   30150 command_runner.go:130] >     {
	I1205 20:13:55.393245   30150 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1205 20:13:55.393251   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.393257   30150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1205 20:13:55.393263   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393267   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.393280   30150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1205 20:13:55.393288   30150 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1205 20:13:55.393294   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393298   30150 command_runner.go:130] >       "size": "123261750",
	I1205 20:13:55.393302   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:55.393309   30150 command_runner.go:130] >         "value": "0"
	I1205 20:13:55.393313   30150 command_runner.go:130] >       },
	I1205 20:13:55.393319   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393326   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393332   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393336   30150 command_runner.go:130] >     },
	I1205 20:13:55.393342   30150 command_runner.go:130] >     {
	I1205 20:13:55.393348   30150 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1205 20:13:55.393355   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.393360   30150 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1205 20:13:55.393366   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393371   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.393381   30150 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1205 20:13:55.393390   30150 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1205 20:13:55.393394   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393401   30150 command_runner.go:130] >       "size": "74749335",
	I1205 20:13:55.393406   30150 command_runner.go:130] >       "uid": null,
	I1205 20:13:55.393412   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393416   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393423   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393426   30150 command_runner.go:130] >     },
	I1205 20:13:55.393436   30150 command_runner.go:130] >     {
	I1205 20:13:55.393445   30150 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1205 20:13:55.393451   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.393457   30150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1205 20:13:55.393463   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393467   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.393493   30150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1205 20:13:55.393503   30150 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1205 20:13:55.393508   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393513   30150 command_runner.go:130] >       "size": "61551410",
	I1205 20:13:55.393519   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:55.393523   30150 command_runner.go:130] >         "value": "0"
	I1205 20:13:55.393528   30150 command_runner.go:130] >       },
	I1205 20:13:55.393533   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393540   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393544   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393550   30150 command_runner.go:130] >     },
	I1205 20:13:55.393554   30150 command_runner.go:130] >     {
	I1205 20:13:55.393564   30150 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1205 20:13:55.393569   30150 command_runner.go:130] >       "repoTags": [
	I1205 20:13:55.393576   30150 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1205 20:13:55.393579   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393591   30150 command_runner.go:130] >       "repoDigests": [
	I1205 20:13:55.393600   30150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1205 20:13:55.393609   30150 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1205 20:13:55.393615   30150 command_runner.go:130] >       ],
	I1205 20:13:55.393620   30150 command_runner.go:130] >       "size": "750414",
	I1205 20:13:55.393625   30150 command_runner.go:130] >       "uid": {
	I1205 20:13:55.393630   30150 command_runner.go:130] >         "value": "65535"
	I1205 20:13:55.393636   30150 command_runner.go:130] >       },
	I1205 20:13:55.393640   30150 command_runner.go:130] >       "username": "",
	I1205 20:13:55.393647   30150 command_runner.go:130] >       "spec": null,
	I1205 20:13:55.393651   30150 command_runner.go:130] >       "pinned": false
	I1205 20:13:55.393654   30150 command_runner.go:130] >     }
	I1205 20:13:55.393660   30150 command_runner.go:130] >   ]
	I1205 20:13:55.393664   30150 command_runner.go:130] > }
	I1205 20:13:55.393771   30150 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:13:55.393781   30150 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:13:55.393835   30150 ssh_runner.go:195] Run: crio config
	I1205 20:13:55.445945   30150 command_runner.go:130] ! time="2023-12-05 20:13:55.394420956Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1205 20:13:55.446007   30150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:13:55.455746   30150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:13:55.455777   30150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:13:55.455788   30150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:13:55.455794   30150 command_runner.go:130] > #
	I1205 20:13:55.455809   30150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:13:55.455822   30150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:13:55.455835   30150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:13:55.455849   30150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:13:55.455858   30150 command_runner.go:130] > # reload'.
	I1205 20:13:55.455870   30150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:13:55.455883   30150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:13:55.455896   30150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:13:55.455908   30150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:13:55.455914   30150 command_runner.go:130] > [crio]
	I1205 20:13:55.455924   30150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:13:55.455935   30150 command_runner.go:130] > # containers images, in this directory.
	I1205 20:13:55.455946   30150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:13:55.455964   30150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:13:55.455977   30150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:13:55.455990   30150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:13:55.456003   30150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:13:55.456011   30150 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:13:55.456023   30150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:13:55.456032   30150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:13:55.456041   30150 command_runner.go:130] > storage_option = [
	I1205 20:13:55.456049   30150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:13:55.456058   30150 command_runner.go:130] > ]
	I1205 20:13:55.456070   30150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:13:55.456084   30150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:13:55.456094   30150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:13:55.456103   30150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:13:55.456132   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:13:55.456140   30150 command_runner.go:130] > # always happen on a node reboot
	I1205 20:13:55.456146   30150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:13:55.456154   30150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:13:55.456160   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:13:55.456177   30150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:13:55.456184   30150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:13:55.456192   30150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:13:55.456202   30150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:13:55.456208   30150 command_runner.go:130] > # internal_wipe = true
	I1205 20:13:55.456214   30150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:13:55.456222   30150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:13:55.456228   30150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:13:55.456235   30150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:13:55.456241   30150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:13:55.456247   30150 command_runner.go:130] > [crio.api]
	I1205 20:13:55.456253   30150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:13:55.456260   30150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:13:55.456266   30150 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:13:55.456273   30150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:13:55.456279   30150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:13:55.456285   30150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:13:55.456290   30150 command_runner.go:130] > # stream_port = "0"
	I1205 20:13:55.456297   30150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:13:55.456304   30150 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:13:55.456311   30150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:13:55.456318   30150 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:13:55.456323   30150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:13:55.456331   30150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:13:55.456335   30150 command_runner.go:130] > # minutes.
	I1205 20:13:55.456339   30150 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:13:55.456348   30150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:13:55.456354   30150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:13:55.456363   30150 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:13:55.456369   30150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:13:55.456377   30150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:13:55.456383   30150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:13:55.456389   30150 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:13:55.456396   30150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:13:55.456403   30150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:13:55.456410   30150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:13:55.456419   30150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 20:13:55.456442   30150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:13:55.456450   30150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:13:55.456455   30150 command_runner.go:130] > [crio.runtime]
	I1205 20:13:55.456463   30150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:13:55.456468   30150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:13:55.456475   30150 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:13:55.456480   30150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:13:55.456487   30150 command_runner.go:130] > # default_ulimits = [
	I1205 20:13:55.456492   30150 command_runner.go:130] > # ]
	I1205 20:13:55.456502   30150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:13:55.456506   30150 command_runner.go:130] > # no_pivot = false
	I1205 20:13:55.456514   30150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:13:55.456520   30150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:13:55.456527   30150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:13:55.456533   30150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:13:55.456540   30150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:13:55.456547   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:13:55.456556   30150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:13:55.456560   30150 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:13:55.456568   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:13:55.456575   30150 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:13:55.456581   30150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:13:55.456589   30150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:13:55.456595   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:13:55.456604   30150 command_runner.go:130] > conmon_env = [
	I1205 20:13:55.456610   30150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:13:55.456616   30150 command_runner.go:130] > ]
	I1205 20:13:55.456621   30150 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:13:55.456628   30150 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:13:55.456634   30150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:13:55.456640   30150 command_runner.go:130] > # default_env = [
	I1205 20:13:55.456643   30150 command_runner.go:130] > # ]
	I1205 20:13:55.456649   30150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:13:55.456655   30150 command_runner.go:130] > # selinux = false
	I1205 20:13:55.456661   30150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:13:55.456675   30150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:13:55.456683   30150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:13:55.456687   30150 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:13:55.456694   30150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:13:55.456699   30150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:13:55.456708   30150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:13:55.456712   30150 command_runner.go:130] > # which might increase security.
	I1205 20:13:55.456718   30150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:13:55.456724   30150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:13:55.456732   30150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:13:55.456738   30150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:13:55.456747   30150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:13:55.456752   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:13:55.456758   30150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:13:55.456764   30150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:13:55.456771   30150 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:13:55.456776   30150 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:13:55.456784   30150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:13:55.456791   30150 command_runner.go:130] > # irqbalance daemon.
	I1205 20:13:55.456798   30150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:13:55.456804   30150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:13:55.456812   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:13:55.456816   30150 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:13:55.456822   30150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:13:55.456826   30150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:13:55.456836   30150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:13:55.456841   30150 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:13:55.456849   30150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:13:55.456855   30150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:13:55.456861   30150 command_runner.go:130] > # will be added.
	I1205 20:13:55.456866   30150 command_runner.go:130] > # default_capabilities = [
	I1205 20:13:55.456872   30150 command_runner.go:130] > # 	"CHOWN",
	I1205 20:13:55.456876   30150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:13:55.456882   30150 command_runner.go:130] > # 	"FSETID",
	I1205 20:13:55.456885   30150 command_runner.go:130] > # 	"FOWNER",
	I1205 20:13:55.456891   30150 command_runner.go:130] > # 	"SETGID",
	I1205 20:13:55.456898   30150 command_runner.go:130] > # 	"SETUID",
	I1205 20:13:55.456904   30150 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:13:55.456908   30150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:13:55.456911   30150 command_runner.go:130] > # 	"KILL",
	I1205 20:13:55.456915   30150 command_runner.go:130] > # ]
	I1205 20:13:55.456921   30150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:13:55.456929   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:13:55.456933   30150 command_runner.go:130] > # default_sysctls = [
	I1205 20:13:55.456939   30150 command_runner.go:130] > # ]
	I1205 20:13:55.456943   30150 command_runner.go:130] > # List of devices on the host that a
	I1205 20:13:55.456952   30150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:13:55.456956   30150 command_runner.go:130] > # allowed_devices = [
	I1205 20:13:55.456962   30150 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:13:55.456966   30150 command_runner.go:130] > # ]
	I1205 20:13:55.456970   30150 command_runner.go:130] > # List of additional devices, specified as
	I1205 20:13:55.456978   30150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:13:55.456985   30150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:13:55.457038   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:13:55.457057   30150 command_runner.go:130] > # additional_devices = [
	I1205 20:13:55.457062   30150 command_runner.go:130] > # ]
	I1205 20:13:55.457071   30150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:13:55.457081   30150 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:13:55.457088   30150 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:13:55.457095   30150 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:13:55.457103   30150 command_runner.go:130] > # ]
	I1205 20:13:55.457119   30150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:13:55.457131   30150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:13:55.457138   30150 command_runner.go:130] > # Defaults to false.
	I1205 20:13:55.457143   30150 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:13:55.457150   30150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:13:55.457158   30150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:13:55.457162   30150 command_runner.go:130] > # hooks_dir = [
	I1205 20:13:55.457170   30150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:13:55.457173   30150 command_runner.go:130] > # ]
	I1205 20:13:55.457182   30150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:13:55.457188   30150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:13:55.457199   30150 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:13:55.457205   30150 command_runner.go:130] > #
	I1205 20:13:55.457212   30150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:13:55.457221   30150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:13:55.457226   30150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:13:55.457230   30150 command_runner.go:130] > #
	I1205 20:13:55.457236   30150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:13:55.457245   30150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:13:55.457251   30150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:13:55.457259   30150 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:13:55.457262   30150 command_runner.go:130] > #
	I1205 20:13:55.457271   30150 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:13:55.457276   30150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:13:55.457283   30150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:13:55.457289   30150 command_runner.go:130] > pids_limit = 1024
	I1205 20:13:55.457295   30150 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:13:55.457304   30150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:13:55.457310   30150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:13:55.457319   30150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:13:55.457326   30150 command_runner.go:130] > # log_size_max = -1
	I1205 20:13:55.457333   30150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:13:55.457339   30150 command_runner.go:130] > # log_to_journald = false
	I1205 20:13:55.457345   30150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:13:55.457353   30150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:13:55.457358   30150 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:13:55.457366   30150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:13:55.457371   30150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:13:55.457377   30150 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:13:55.457383   30150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:13:55.457393   30150 command_runner.go:130] > # read_only = false
	I1205 20:13:55.457399   30150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:13:55.457408   30150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:13:55.457412   30150 command_runner.go:130] > # live configuration reload.
	I1205 20:13:55.457419   30150 command_runner.go:130] > # log_level = "info"
	I1205 20:13:55.457425   30150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:13:55.457432   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:13:55.457439   30150 command_runner.go:130] > # log_filter = ""
	I1205 20:13:55.457447   30150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:13:55.457454   30150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:13:55.457460   30150 command_runner.go:130] > # separated by comma.
	I1205 20:13:55.457464   30150 command_runner.go:130] > # uid_mappings = ""
	I1205 20:13:55.457472   30150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:13:55.457478   30150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:13:55.457484   30150 command_runner.go:130] > # separated by comma.
	I1205 20:13:55.457488   30150 command_runner.go:130] > # gid_mappings = ""
	I1205 20:13:55.457497   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:13:55.457503   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:13:55.457511   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:13:55.457515   30150 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:13:55.457522   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:13:55.457528   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:13:55.457536   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:13:55.457541   30150 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:13:55.457549   30150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:13:55.457557   30150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:13:55.457565   30150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:13:55.457569   30150 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:13:55.457575   30150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:13:55.457585   30150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:13:55.457591   30150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:13:55.457597   30150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:13:55.457610   30150 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:13:55.457618   30150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:13:55.457624   30150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:13:55.457632   30150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:13:55.457638   30150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:13:55.457644   30150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:13:55.457651   30150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:13:55.457656   30150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:13:55.457662   30150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:13:55.457669   30150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:13:55.457677   30150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:13:55.457688   30150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:13:55.457697   30150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:13:55.457701   30150 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:13:55.457709   30150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:13:55.457716   30150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:13:55.457727   30150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:13:55.457733   30150 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:13:55.457741   30150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:13:55.457749   30150 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:13:55.457753   30150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:13:55.457759   30150 command_runner.go:130] > # ]
	I1205 20:13:55.457765   30150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:13:55.457772   30150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:13:55.457780   30150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:13:55.457789   30150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:13:55.457792   30150 command_runner.go:130] > #
	I1205 20:13:55.457799   30150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:13:55.457804   30150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:13:55.457811   30150 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:13:55.457819   30150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:13:55.457824   30150 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:13:55.457828   30150 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:13:55.457831   30150 command_runner.go:130] > # Where:
	I1205 20:13:55.457837   30150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:13:55.457845   30150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:13:55.457852   30150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:13:55.457860   30150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:13:55.457864   30150 command_runner.go:130] > #   in $PATH.
	I1205 20:13:55.457871   30150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:13:55.457875   30150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:13:55.457883   30150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:13:55.457888   30150 command_runner.go:130] > #   state.
	I1205 20:13:55.457896   30150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:13:55.457902   30150 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:13:55.457910   30150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:13:55.457917   30150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:13:55.457928   30150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:13:55.457935   30150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:13:55.457942   30150 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:13:55.457948   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:13:55.457957   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:13:55.457963   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:13:55.457971   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:13:55.457978   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:13:55.457986   30150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:13:55.457993   30150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:13:55.458006   30150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:13:55.458015   30150 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:13:55.458023   30150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:13:55.458033   30150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:13:55.458043   30150 command_runner.go:130] > runtime_type = "oci"
	I1205 20:13:55.458055   30150 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:13:55.458062   30150 command_runner.go:130] > runtime_config_path = ""
	I1205 20:13:55.458071   30150 command_runner.go:130] > monitor_path = ""
	I1205 20:13:55.458084   30150 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:13:55.458094   30150 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:13:55.458103   30150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:13:55.458117   30150 command_runner.go:130] > # running containers
	I1205 20:13:55.458127   30150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:13:55.458136   30150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:13:55.458186   30150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:13:55.458198   30150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:13:55.458203   30150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:13:55.458207   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:13:55.458212   30150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:13:55.458216   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:13:55.458226   30150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:13:55.458231   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:13:55.458238   30150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:13:55.458246   30150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:13:55.458252   30150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:13:55.458262   30150 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:13:55.458286   30150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:13:55.458298   30150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:13:55.458310   30150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:13:55.458321   30150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:13:55.458328   30150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:13:55.458335   30150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:13:55.458341   30150 command_runner.go:130] > # Example:
	I1205 20:13:55.458346   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:13:55.458353   30150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:13:55.458358   30150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:13:55.458366   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:13:55.458369   30150 command_runner.go:130] > # cpuset = 0
	I1205 20:13:55.458373   30150 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:13:55.458382   30150 command_runner.go:130] > # Where:
	I1205 20:13:55.458387   30150 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:13:55.458393   30150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:13:55.458401   30150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:13:55.458407   30150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:13:55.458419   30150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:13:55.458428   30150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:13:55.458431   30150 command_runner.go:130] > # 
	I1205 20:13:55.458437   30150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:13:55.458443   30150 command_runner.go:130] > #
	I1205 20:13:55.458449   30150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:13:55.458455   30150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:13:55.458461   30150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:13:55.458470   30150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:13:55.458475   30150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:13:55.458481   30150 command_runner.go:130] > [crio.image]
	I1205 20:13:55.458487   30150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:13:55.458494   30150 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:13:55.458500   30150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:13:55.458508   30150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:13:55.458515   30150 command_runner.go:130] > # global_auth_file = ""
	I1205 20:13:55.458520   30150 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:13:55.458529   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:13:55.458539   30150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:13:55.458546   30150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:13:55.458554   30150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:13:55.458559   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:13:55.458563   30150 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:13:55.458571   30150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:13:55.458577   30150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 20:13:55.458585   30150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 20:13:55.458591   30150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:13:55.458598   30150 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:13:55.458604   30150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:13:55.458613   30150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:13:55.458619   30150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:13:55.458627   30150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:13:55.458632   30150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:13:55.458636   30150 command_runner.go:130] > # signature_policy = ""
	I1205 20:13:55.458642   30150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:13:55.458648   30150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:13:55.458654   30150 command_runner.go:130] > # changing them here.
	I1205 20:13:55.458657   30150 command_runner.go:130] > # insecure_registries = [
	I1205 20:13:55.458661   30150 command_runner.go:130] > # ]
	I1205 20:13:55.458669   30150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:13:55.458674   30150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:13:55.458677   30150 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:13:55.458682   30150 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:13:55.458686   30150 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:13:55.458692   30150 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:13:55.458695   30150 command_runner.go:130] > # CNI plugins.
	I1205 20:13:55.458699   30150 command_runner.go:130] > [crio.network]
	I1205 20:13:55.458704   30150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:13:55.458709   30150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 20:13:55.458713   30150 command_runner.go:130] > # cni_default_network = ""
	I1205 20:13:55.458719   30150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:13:55.458723   30150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:13:55.458728   30150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:13:55.458732   30150 command_runner.go:130] > # plugin_dirs = [
	I1205 20:13:55.458738   30150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:13:55.458742   30150 command_runner.go:130] > # ]
	I1205 20:13:55.458749   30150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:13:55.458753   30150 command_runner.go:130] > [crio.metrics]
	I1205 20:13:55.458758   30150 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:13:55.458762   30150 command_runner.go:130] > enable_metrics = true
	I1205 20:13:55.458766   30150 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:13:55.458770   30150 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:13:55.458776   30150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:13:55.458782   30150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:13:55.458787   30150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:13:55.458791   30150 command_runner.go:130] > # metrics_collectors = [
	I1205 20:13:55.458795   30150 command_runner.go:130] > # 	"operations",
	I1205 20:13:55.458799   30150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:13:55.458804   30150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:13:55.458807   30150 command_runner.go:130] > # 	"operations_errors",
	I1205 20:13:55.458814   30150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:13:55.458818   30150 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:13:55.458826   30150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:13:55.458830   30150 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:13:55.458836   30150 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:13:55.458840   30150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:13:55.458844   30150 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:13:55.458850   30150 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:13:55.458854   30150 command_runner.go:130] > # 	"containers_oom",
	I1205 20:13:55.458858   30150 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:13:55.458865   30150 command_runner.go:130] > # 	"operations_total",
	I1205 20:13:55.458869   30150 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:13:55.458874   30150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:13:55.458878   30150 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:13:55.458882   30150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:13:55.458889   30150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:13:55.458893   30150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:13:55.458898   30150 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:13:55.458903   30150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:13:55.458910   30150 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:13:55.458916   30150 command_runner.go:130] > # ]
	I1205 20:13:55.458923   30150 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:13:55.458928   30150 command_runner.go:130] > # metrics_port = 9090
	I1205 20:13:55.458935   30150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:13:55.458943   30150 command_runner.go:130] > # metrics_socket = ""
	I1205 20:13:55.458950   30150 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:13:55.458956   30150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:13:55.458964   30150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:13:55.458969   30150 command_runner.go:130] > # certificate on any modification event.
	I1205 20:13:55.458975   30150 command_runner.go:130] > # metrics_cert = ""
	I1205 20:13:55.458980   30150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:13:55.458989   30150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:13:55.458997   30150 command_runner.go:130] > # metrics_key = ""
	I1205 20:13:55.459011   30150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:13:55.459020   30150 command_runner.go:130] > [crio.tracing]
	I1205 20:13:55.459029   30150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:13:55.459038   30150 command_runner.go:130] > # enable_tracing = false
	I1205 20:13:55.459047   30150 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 20:13:55.459060   30150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:13:55.459071   30150 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:13:55.459082   30150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:13:55.459092   30150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:13:55.459101   30150 command_runner.go:130] > [crio.stats]
	I1205 20:13:55.459115   30150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:13:55.459126   30150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:13:55.459133   30150 command_runner.go:130] > # stats_collection_period = 0
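	The dump above is the CRI-O configuration minikube inspects before bootstrapping this node; the uncommented keys (conmon, conmon_cgroup = "pod", cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9") are the values it actively manages. Below is a minimal sketch of decoding and sanity-checking those keys from such a file, assuming the third-party github.com/BurntSushi/toml package and the default /etc/crio/crio.conf path (neither is part of this test):

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed third-party TOML decoder
)

// Only the handful of keys the log above shows minikube overriding.
type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			ConmonCgroup  string `toml:"conmon_cgroup"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var c crioConf
	// Default CRI-O config path; adjust for the node being inspected.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cgroup_manager=%q conmon_cgroup=%q pids_limit=%d pause_image=%q\n",
		c.Crio.Runtime.CgroupManager, c.Crio.Runtime.ConmonCgroup,
		c.Crio.Runtime.PidsLimit, c.Crio.Image.PauseImage)
}
```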
	I1205 20:13:55.459215   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:13:55.459227   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:13:55.459245   30150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:13:55.459276   30150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-558947 NodeName:multinode-558947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:13:55.459403   30150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-558947"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
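	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2097 bytes). A quick cross-check that the rendered ClusterConfiguration still carries the pod CIDR chosen earlier (10.244.0.0/16) is to walk the multi-document file; a sketch assuming the gopkg.in/yaml.v3 package, which this test does not itself use:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	yaml "gopkg.in/yaml.v3" // assumed YAML decoder
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each "---"-separated document decodes into a generic map.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc["kind"] == "ClusterConfiguration" {
			net := doc["networking"].(map[string]interface{})
			fmt.Println("podSubnet:", net["podSubnet"]) // expect 10.244.0.0/16
		}
	}
}
```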
	
	I1205 20:13:55.459461   30150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-558947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:13:55.459510   30150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:13:55.469410   30150 command_runner.go:130] > kubeadm
	I1205 20:13:55.469431   30150 command_runner.go:130] > kubectl
	I1205 20:13:55.469438   30150 command_runner.go:130] > kubelet
	I1205 20:13:55.469463   30150 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:13:55.469510   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:13:55.479802   30150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1205 20:13:55.495959   30150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:13:55.511659   30150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1205 20:13:55.528151   30150 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1205 20:13:55.531918   30150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
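	The bash one-liner above makes the hosts entry idempotent: it strips any existing line ending in a tab plus control-plane.minikube.internal and then appends a single canonical "192.168.39.3<TAB>control-plane.minikube.internal" entry. The same update as a hedged Go sketch (IP, hostname, and path taken from the log; it must run with permission to replace /etc/hosts):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

const (
	hostsPath = "/etc/hosts"
	entryName = "control-plane.minikube.internal"
	entryIP   = "192.168.39.3" // node IP from the log above
)

func main() {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	// Drop any existing line for the control-plane alias, mirroring `grep -v`.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+entryName) {
			kept = append(kept, line)
		}
	}
	// Re-append a single canonical entry, mirroring the `echo` in the log.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", entryIP, entryName)
	if err := os.WriteFile(hostsPath, []byte(out), 0644); err != nil {
		log.Fatal(err)
	}
}
```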
	I1205 20:13:55.543496   30150 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947 for IP: 192.168.39.3
	I1205 20:13:55.543543   30150 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:13:55.543696   30150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:13:55.543737   30150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:13:55.543798   30150 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key
	I1205 20:13:55.543864   30150 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key.599d509e
	I1205 20:13:55.543899   30150 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key
	I1205 20:13:55.543909   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:13:55.543922   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:13:55.543933   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:13:55.543950   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:13:55.543962   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:13:55.543974   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:13:55.543988   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:13:55.544002   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:13:55.544073   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:13:55.544111   30150 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:13:55.544126   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:13:55.544160   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:13:55.544192   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:13:55.544213   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:13:55.544250   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:13:55.544293   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 20:13:55.544311   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 20:13:55.544323   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:13:55.544831   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:13:55.568965   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:13:55.591421   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:13:55.613575   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:13:55.637624   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:13:55.661445   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:13:55.685109   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:13:55.707911   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:13:55.731359   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:13:55.753712   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:13:55.776377   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:13:55.798691   30150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:13:55.814815   30150 ssh_runner.go:195] Run: openssl version
	I1205 20:13:55.820334   30150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1205 20:13:55.820396   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:13:55.830019   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:13:55.834254   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:13:55.834370   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:13:55.834436   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:13:55.839706   30150 command_runner.go:130] > 51391683
	I1205 20:13:55.840061   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:13:55.849465   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:13:55.859800   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:13:55.864190   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:13:55.864525   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:13:55.864585   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:13:55.870035   30150 command_runner.go:130] > 3ec20f2e
	I1205 20:13:55.870249   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:13:55.880224   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:13:55.889876   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:13:55.894402   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:13:55.894789   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:13:55.894842   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:13:55.900370   30150 command_runner.go:130] > b5213941
	I1205 20:13:55.900767   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
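The three symlink commands above place each CA under /etc/ssl/certs/<subject-hash>.0 so the system trust store can resolve it. Below is a minimal sketch of that hash-and-link step, shelling out to the same `openssl x509 -hash` invocation the log shows; it assumes openssl is on PATH and write access to the target directory, and the helper name `linkCAByHash` is hypothetical, not minikube's own function.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash mirrors the "openssl x509 -hash -noout -in <cert>" + "ln -fs" step:
// it computes the certificate's subject hash and links it into certDir as <hash>.0.
func linkCAByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // "-f" semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```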
	I1205 20:13:55.910770   30150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:13:55.915160   30150 command_runner.go:130] > ca.crt
	I1205 20:13:55.915175   30150 command_runner.go:130] > ca.key
	I1205 20:13:55.915180   30150 command_runner.go:130] > healthcheck-client.crt
	I1205 20:13:55.915184   30150 command_runner.go:130] > healthcheck-client.key
	I1205 20:13:55.915189   30150 command_runner.go:130] > peer.crt
	I1205 20:13:55.915192   30150 command_runner.go:130] > peer.key
	I1205 20:13:55.915196   30150 command_runner.go:130] > server.crt
	I1205 20:13:55.915200   30150 command_runner.go:130] > server.key
	I1205 20:13:55.915238   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:13:55.921315   30150 command_runner.go:130] > Certificate will not expire
	I1205 20:13:55.921374   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:13:55.927105   30150 command_runner.go:130] > Certificate will not expire
	I1205 20:13:55.927164   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:13:55.933013   30150 command_runner.go:130] > Certificate will not expire
	I1205 20:13:55.933087   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:13:55.938773   30150 command_runner.go:130] > Certificate will not expire
	I1205 20:13:55.938936   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:13:55.944718   30150 command_runner.go:130] > Certificate will not expire
	I1205 20:13:55.945005   30150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:13:55.951009   30150 command_runner.go:130] > Certificate will not expire
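Each `-checkend 86400` call above asks whether a certificate will still be valid 24 hours from now; "Certificate will not expire" means it will. The same check can be done without shelling out, as in this sketch using crypto/x509 (the function name `expiresWithin` is hypothetical):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```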
	I1205 20:13:55.951063   30150 kubeadm.go:404] StartCluster: {Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:13:55.951170   30150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:13:55.951211   30150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:13:55.992247   30150 cri.go:89] found id: ""
	I1205 20:13:55.992314   30150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:13:56.001893   30150 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1205 20:13:56.001915   30150 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1205 20:13:56.001923   30150 command_runner.go:130] > /var/lib/minikube/etcd:
	I1205 20:13:56.001928   30150 command_runner.go:130] > member
	I1205 20:13:56.001999   30150 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:13:56.002011   30150 kubeadm.go:636] restartCluster start
	I1205 20:13:56.002067   30150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:13:56.011554   30150 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:56.012213   30150 kubeconfig.go:92] found "multinode-558947" server: "https://192.168.39.3:8443"
	I1205 20:13:56.012889   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:13:56.013244   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:13:56.013973   30150 cert_rotation.go:137] Starting client certificate rotation controller
	I1205 20:13:56.014198   30150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:13:56.022976   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:56.023047   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:56.034166   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:56.034186   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:56.034229   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:56.044766   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:56.545480   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:56.545561   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:56.557315   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:57.044960   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:57.045044   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:57.056687   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:57.545232   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:57.545337   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:57.557335   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:58.044906   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:58.044992   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:58.056190   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:58.545826   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:58.545915   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:58.557211   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:59.045781   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:59.045867   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:59.057226   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:13:59.545878   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:13:59.545944   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:13:59.557463   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:00.045443   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:00.045521   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:00.057279   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:00.544824   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:00.544916   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:00.555856   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:01.045423   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:01.045506   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:01.056765   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:01.545311   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:01.545400   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:01.557731   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:02.045510   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:02.045595   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:02.056988   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:02.545514   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:02.545607   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:02.557062   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:03.045677   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:03.045752   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:03.056877   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:03.545441   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:03.545542   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:03.557205   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:04.045812   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:04.045891   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:04.057202   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:04.545802   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:04.545874   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:04.557482   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:05.045538   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:05.045633   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:05.057313   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:05.544886   30150 api_server.go:166] Checking apiserver status ...
	I1205 20:14:05.544970   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:14:05.556288   30150 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:14:06.023035   30150 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:14:06.023092   30150 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:14:06.023107   30150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:14:06.023164   30150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:14:06.061764   30150 cri.go:89] found id: ""
	I1205 20:14:06.061841   30150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:14:06.076930   30150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:14:06.086519   30150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1205 20:14:06.086538   30150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1205 20:14:06.086548   30150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1205 20:14:06.086555   30150 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:14:06.086771   30150 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:14:06.086827   30150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:14:06.095108   30150 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:14:06.095137   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:06.212029   30150 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:14:06.212050   30150 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1205 20:14:06.212480   30150 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1205 20:14:06.213583   30150 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:14:06.214542   30150 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1205 20:14:06.215133   30150 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:14:06.216028   30150 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1205 20:14:06.216542   30150 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1205 20:14:06.217229   30150 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:14:06.217536   30150 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:14:06.217964   30150 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:14:06.218627   30150 command_runner.go:130] > [certs] Using the existing "sa" key
	I1205 20:14:06.219966   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:06.269217   30150 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:14:06.421073   30150 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:14:06.644564   30150 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:14:06.779955   30150 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:14:06.848023   30150 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:14:06.851110   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:07.053934   30150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:14:07.053968   30150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:14:07.053974   30150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:14:07.053996   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:07.138946   30150 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:14:07.138970   30150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:14:07.138977   30150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:14:07.138984   30150 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:14:07.139019   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:07.212319   30150 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:14:07.215896   30150 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:14:07.215968   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:07.241061   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:07.759530   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:08.259243   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:08.759906   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:09.259005   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:09.759207   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:09.787672   30150 command_runner.go:130] > 1129
	I1205 20:14:09.787942   30150 api_server.go:72] duration metric: took 2.57204763s to wait for apiserver process to appear ...
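The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` entries are a poll loop: keep asking for the apiserver PID until pgrep succeeds or a deadline passes. A minimal sketch of such a loop, assuming pgrep is available on the target host; the roughly 500 ms retry interval matches the spacing visible in the timestamps above, but the helper itself is illustrative, not minikube's code.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls "pgrep -xnf pattern" until it returns a PID or ctx expires.
func waitForProcess(ctx context.Context, pattern string) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // newest matching PID
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("process %q did not appear: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver pid:", pid)
}
```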
	I1205 20:14:09.787962   30150 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:14:09.787980   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:09.788599   30150 api_server.go:269] stopped: https://192.168.39.3:8443/healthz: Get "https://192.168.39.3:8443/healthz": dial tcp 192.168.39.3:8443: connect: connection refused
	I1205 20:14:09.788629   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:09.789034   30150 api_server.go:269] stopped: https://192.168.39.3:8443/healthz: Get "https://192.168.39.3:8443/healthz": dial tcp 192.168.39.3:8443: connect: connection refused
	I1205 20:14:10.289747   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:13.903021   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:14:13.903048   30150 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:14:13.903060   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:13.952661   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:14:13.952690   30150 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:14:14.290127   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:14.300830   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:14:14.300862   30150 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:14:14.789876   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:14.795429   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:14:14.795459   30150 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:14:15.290096   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:15.296860   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1205 20:14:15.296951   30150 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I1205 20:14:15.296962   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:15.296975   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:15.296985   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:15.308790   30150 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 20:14:15.308823   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:15.308833   30150 round_trippers.go:580]     Content-Length: 264
	I1205 20:14:15.308842   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:15 GMT
	I1205 20:14:15.308850   30150 round_trippers.go:580]     Audit-Id: 4c044d9a-795f-47bb-b65c-2e5fb8e5fd13
	I1205 20:14:15.308857   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:15.308865   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:15.308881   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:15.308893   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:15.308935   30150 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1205 20:14:15.309042   30150 api_server.go:141] control plane version: v1.28.4
	I1205 20:14:15.309067   30150 api_server.go:131] duration metric: took 5.521096542s to wait for apiserver health ...
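The healthz phase above keeps GETting https://192.168.39.3:8443/healthz, treating the 403 and 500 responses as "not ready yet" and stopping at the first 200. A minimal sketch of that kind of poll follows; it skips TLS verification purely to stay self-contained, whereas the real client in this log authenticates with the cluster CA and client certificates.

```go
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Example only: skip TLS verification instead of wiring up the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.39.3:8443/healthz"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver healthy")
}
```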
	I1205 20:14:15.309081   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:14:15.309090   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:14:15.311225   30150 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:14:15.312833   30150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:14:15.350798   30150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:14:15.350825   30150 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1205 20:14:15.350835   30150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1205 20:14:15.350844   30150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:14:15.350853   30150 command_runner.go:130] > Access: 2023-12-05 20:13:42.646892622 +0000
	I1205 20:14:15.350861   30150 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1205 20:14:15.350869   30150 command_runner.go:130] > Change: 2023-12-05 20:13:40.685892622 +0000
	I1205 20:14:15.350874   30150 command_runner.go:130] >  Birth: -
	I1205 20:14:15.351063   30150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:14:15.351098   30150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:14:15.412512   30150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:14:16.474311   30150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:14:16.484614   30150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:14:16.488527   30150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:14:16.508991   30150 command_runner.go:130] > daemonset.apps/kindnet configured
	I1205 20:14:16.511742   30150 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.099195294s)
	I1205 20:14:16.511769   30150 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:14:16.511879   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:16.511890   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.511898   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.511904   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.516312   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:16.516339   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.516348   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.516356   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.516362   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.516369   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.516378   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.516385   30150 round_trippers.go:580]     Audit-Id: 7a56a3e6-ceb1-4589-a24c-ce2b2e8e3e9a
	I1205 20:14:16.517740   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"784"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82558 chars]
	I1205 20:14:16.521687   30150 system_pods.go:59] 12 kube-system pods found
	I1205 20:14:16.521723   30150 system_pods.go:61] "coredns-5dd5756b68-knl4d" [28d6c367-593c-469a-90c6-b3c13cedc3df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:14:16.521733   30150 system_pods.go:61] "etcd-multinode-558947" [118e2032-1898-42c0-9aa2-3f15356e9ff3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:14:16.521741   30150 system_pods.go:61] "kindnet-7dnjd" [f957ff7c-baef-49a4-83cb-db708a3f1017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:16.521747   30150 system_pods.go:61] "kindnet-cv76g" [88acd23e-99f5-4c5f-a03c-1c961a511eac] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:16.521754   30150 system_pods.go:61] "kindnet-xcs7j" [c86c9a0d-7018-41d4-9bf2-60262f1a66e6] Running
	I1205 20:14:16.521763   30150 system_pods.go:61] "kube-apiserver-multinode-558947" [36300192-b165-4bee-b791-9fce329428f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:14:16.521776   30150 system_pods.go:61] "kube-controller-manager-multinode-558947" [49ee6fa8-b7cd-4880-b4db-a1717b685750] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:14:16.521792   30150 system_pods.go:61] "kube-proxy-kjph8" [05167608-ef4c-4bac-b57b-0330ab4cef76] Running
	I1205 20:14:16.521799   30150 system_pods.go:61] "kube-proxy-mgmt2" [41275cfd-cb0f-4886-b1bc-a86b7e20cc14] Running
	I1205 20:14:16.521804   30150 system_pods.go:61] "kube-proxy-xvjj7" [19641919-0011-4726-b884-cc468d0f2dd0] Running
	I1205 20:14:16.521812   30150 system_pods.go:61] "kube-scheduler-multinode-558947" [526e311f-432f-4c9a-ad6e-19855cae55be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:14:16.521817   30150 system_pods.go:61] "storage-provisioner" [58d4c242-7ea5-49f5-999c-3c9135144038] Running
	I1205 20:14:16.521823   30150 system_pods.go:74] duration metric: took 10.048296ms to wait for pod list to return data ...
	I1205 20:14:16.521832   30150 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:14:16.521889   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:14:16.521901   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.521909   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.521917   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.524900   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:16.524921   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.524930   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.524937   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.524944   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.524958   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.524966   30150 round_trippers.go:580]     Audit-Id: c473b579-b215-4601-a1db-10612092c59b
	I1205 20:14:16.524976   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.525190   30150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"784"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16353 chars]
	I1205 20:14:16.526168   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:16.526194   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:16.526204   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:16.526208   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:16.526212   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:16.526216   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:16.526220   30150 node_conditions.go:105] duration metric: took 4.384017ms to run NodePressure ...
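The NodePressure check above lists the nodes and reads each node's capacity from its status; the reported "cpu capacity is 2" and "storage ephemeral capacity is 17784752Ki" come from those capacity fields. A minimal sketch of the same read with client-go, assuming a kubeconfig is supplied via the KUBECONFIG environment variable rather than the hard-coded path used in this run:

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig location comes from the environment for this example.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```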
	I1205 20:14:16.526239   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:14:16.738301   30150 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1205 20:14:16.738325   30150 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1205 20:14:16.738347   30150 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:14:16.738431   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1205 20:14:16.738441   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.738451   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.738460   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.742401   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:16.742419   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.742426   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.742431   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.742436   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.742441   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.742446   30150 round_trippers.go:580]     Audit-Id: b285c66b-8484-4c61-a99b-b70305c3163b
	I1205 20:14:16.742454   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.742942   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"786"},"items":[{"metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"775","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 28832 chars]
	I1205 20:14:16.743888   30150 kubeadm.go:787] kubelet initialised
	I1205 20:14:16.743908   30150 kubeadm.go:788] duration metric: took 5.549568ms waiting for restarted kubelet to initialise ...
	I1205 20:14:16.743914   30150 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:14:16.743967   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:16.743975   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.743982   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.743988   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.749373   30150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:14:16.749388   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.749394   30150 round_trippers.go:580]     Audit-Id: 6b593c74-317d-4c2b-9262-3e5ff2b98466
	I1205 20:14:16.749400   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.749405   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.749410   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.749414   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.749421   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.750539   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"786"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82558 chars]
	I1205 20:14:16.753898   30150 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:16.753961   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:16.753974   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.753982   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.753990   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.755969   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.755991   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.756000   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.756008   30150 round_trippers.go:580]     Audit-Id: 04c1c71f-3e05-4c39-9d24-f99ac3c9bec8
	I1205 20:14:16.756015   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.756029   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.756037   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.756047   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.756203   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:16.756584   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:16.756595   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.756602   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.756608   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.758372   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.758391   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.758401   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.758410   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.758421   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.758428   30150 round_trippers.go:580]     Audit-Id: 0067896f-93a7-49b6-ac74-790a89df6a04
	I1205 20:14:16.758438   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.758448   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.758619   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:16.758944   30150 pod_ready.go:97] node "multinode-558947" hosting pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.758971   30150 pod_ready.go:81] duration metric: took 5.053841ms waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:16.758989   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
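The coredns wait is skipped because the hosting node still reports Ready=False; waiting on a pod only makes sense once its node's Ready condition is True. A small sketch of that node-readiness check with client-go, reusing the clientset construction from the previous sketch (`nodeIsReady` and the package name are hypothetical):

```go
package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's NodeReady condition is True,
// the same condition behind the "is currently not Ready (skipping!)" lines above.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```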
	I1205 20:14:16.759006   30150 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:16.759060   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:14:16.759072   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.759086   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.759098   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.760981   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.760998   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.761008   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.761015   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.761036   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.761048   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.761059   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.761066   30150 round_trippers.go:580]     Audit-Id: e197877a-c9bf-4455-822f-d6c7275098f7
	I1205 20:14:16.761275   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"775","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6057 chars]
	I1205 20:14:16.761706   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:16.761720   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.761726   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.761732   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.763481   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.763494   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.763500   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.763510   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.763519   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.763537   30150 round_trippers.go:580]     Audit-Id: 6740e208-c04d-4173-b147-a496c61bb232
	I1205 20:14:16.763545   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.763553   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.763963   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:16.764348   30150 pod_ready.go:97] node "multinode-558947" hosting pod "etcd-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.764368   30150 pod_ready.go:81] duration metric: took 5.351288ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:16.764378   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "etcd-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.764398   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:16.764463   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:14:16.764473   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.764482   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.764494   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.766309   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.766321   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.766326   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.766332   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.766341   30150 round_trippers.go:580]     Audit-Id: 7a1ecc32-bee4-4537-9937-4fad8729d680
	I1205 20:14:16.766349   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.766356   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.766372   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.766608   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"776","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7615 chars]
	I1205 20:14:16.767060   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:16.767076   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.767086   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.767096   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.769636   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:16.769658   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.769668   30150 round_trippers.go:580]     Audit-Id: d6d84ef9-58c0-47f3-92ff-2d3e8eb6a70a
	I1205 20:14:16.769676   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.769685   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.769693   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.769701   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.769714   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.769942   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:16.770430   30150 pod_ready.go:97] node "multinode-558947" hosting pod "kube-apiserver-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.770476   30150 pod_ready.go:81] duration metric: took 6.063376ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:16.770488   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "kube-apiserver-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.770496   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:16.770542   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:14:16.770551   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.770558   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.770568   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.772548   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:16.772561   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.772573   30150 round_trippers.go:580]     Audit-Id: 118e6b55-8540-4fe0-98d2-95a3f413392c
	I1205 20:14:16.772578   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.772586   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.772597   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.772611   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.772620   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.772927   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"771","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7208 chars]
	I1205 20:14:16.912664   30150 request.go:629] Waited for 139.359983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:16.912759   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:16.912766   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:16.912773   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:16.912783   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:16.915406   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:16.915428   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:16.915441   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:16 GMT
	I1205 20:14:16.915448   30150 round_trippers.go:580]     Audit-Id: 4f7cd73b-8b53-4e6e-ad0d-006c78d511da
	I1205 20:14:16.915456   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:16.915465   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:16.915481   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:16.915496   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:16.915614   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:16.915925   30150 pod_ready.go:97] node "multinode-558947" hosting pod "kube-controller-manager-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.915945   30150 pod_ready.go:81] duration metric: took 145.439136ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:16.915957   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "kube-controller-manager-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:16.915966   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:17.112422   30150 request.go:629] Waited for 196.389474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:14:17.112497   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:14:17.112502   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:17.112510   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:17.112523   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:17.115201   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:17.115222   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:17.115231   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:17 GMT
	I1205 20:14:17.115238   30150 round_trippers.go:580]     Audit-Id: 668089c0-4378-4a17-95aa-f4ce74f8b047
	I1205 20:14:17.115247   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:17.115256   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:17.115274   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:17.115286   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:17.115658   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"517","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1205 20:14:17.312404   30150 request.go:629] Waited for 196.351309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:14:17.312477   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:14:17.312486   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:17.312497   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:17.312511   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:17.316487   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:17.316514   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:17.316525   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:17.316533   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:17 GMT
	I1205 20:14:17.316541   30150 round_trippers.go:580]     Audit-Id: 8b63d693-a354-4a6c-b2a6-688011dbe608
	I1205 20:14:17.316549   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:17.316566   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:17.316574   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:17.316725   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"751","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_06_21_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1205 20:14:17.317020   30150 pod_ready.go:92] pod "kube-proxy-kjph8" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:17.317038   30150 pod_ready.go:81] duration metric: took 401.057859ms waiting for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:17.317052   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:17.512532   30150 request.go:629] Waited for 195.410686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:14:17.512583   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:14:17.512588   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:17.512595   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:17.512606   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:17.515241   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:17.515262   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:17.515274   30150 round_trippers.go:580]     Audit-Id: 57b18bf4-7545-424f-a4a9-8016307d32e5
	I1205 20:14:17.515287   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:17.515294   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:17.515335   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:17.515350   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:17.515363   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:17 GMT
	I1205 20:14:17.515582   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"783","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:14:17.712327   30150 request.go:629] Waited for 196.362467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:17.712409   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:17.712414   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:17.712422   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:17.712428   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:17.714917   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:17.714936   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:17.714944   30150 round_trippers.go:580]     Audit-Id: 142e896c-82bb-4b63-8631-b49aeccf1f66
	I1205 20:14:17.714949   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:17.714955   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:17.714960   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:17.714964   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:17.714969   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:17 GMT
	I1205 20:14:17.715289   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:17.715579   30150 pod_ready.go:97] node "multinode-558947" hosting pod "kube-proxy-mgmt2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:17.715595   30150 pod_ready.go:81] duration metric: took 398.537178ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:17.715603   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "kube-proxy-mgmt2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:17.715609   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:17.911975   30150 request.go:629] Waited for 196.30647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:14:17.912041   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:14:17.912047   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:17.912055   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:17.912062   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:17.914578   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:17.914598   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:17.914611   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:17 GMT
	I1205 20:14:17.914617   30150 round_trippers.go:580]     Audit-Id: eb447f42-10e4-4db2-99ba-25876af8efa5
	I1205 20:14:17.914622   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:17.914629   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:17.914635   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:17.914640   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:17.915035   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xvjj7","generateName":"kube-proxy-","namespace":"kube-system","uid":"19641919-0011-4726-b884-cc468d0f2dd0","resourceVersion":"724","creationTimestamp":"2023-12-05T20:05:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1205 20:14:18.112840   30150 request.go:629] Waited for 197.401164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:14:18.112891   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:14:18.112896   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.112904   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.112910   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.115375   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:18.115399   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.115410   30150 round_trippers.go:580]     Audit-Id: a98ded7e-23cb-45df-b841-3950b158ac51
	I1205 20:14:18.115419   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.115435   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.115443   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.115451   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.115462   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.115672   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m03","uid":"b3bc91db-0091-4e00-86c1-c071017fca0a","resourceVersion":"744","creationTimestamp":"2023-12-05T20:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_06_21_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1205 20:14:18.115938   30150 pod_ready.go:92] pod "kube-proxy-xvjj7" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:18.115952   30150 pod_ready.go:81] duration metric: took 400.338197ms waiting for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:18.115966   30150 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:18.312449   30150 request.go:629] Waited for 196.409094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:14:18.312531   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:14:18.312542   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.312556   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.312568   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.315248   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:18.315268   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.315275   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.315281   30150 round_trippers.go:580]     Audit-Id: 0e5a4ffb-2c19-415c-9357-31a84155111e
	I1205 20:14:18.315286   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.315293   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.315300   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.315311   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.315546   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"772","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4920 chars]
	I1205 20:14:18.512249   30150 request.go:629] Waited for 196.361874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:18.512335   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:18.512343   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.512354   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.512364   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.514801   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:18.514818   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.514825   30150 round_trippers.go:580]     Audit-Id: 68b339c5-9545-41a1-955d-cd5b888466b7
	I1205 20:14:18.514830   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.514835   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.514840   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.514845   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.514850   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.515047   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:18.515341   30150 pod_ready.go:97] node "multinode-558947" hosting pod "kube-scheduler-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:18.515356   30150 pod_ready.go:81] duration metric: took 399.38448ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	E1205 20:14:18.515365   30150 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-558947" hosting pod "kube-scheduler-multinode-558947" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-558947" has status "Ready":"False"
	I1205 20:14:18.515374   30150 pod_ready.go:38] duration metric: took 1.771444678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:14:18.515397   30150 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:14:18.551714   30150 command_runner.go:130] > -16
	I1205 20:14:18.553718   30150 ops.go:34] apiserver oom_adj: -16
	I1205 20:14:18.553735   30150 kubeadm.go:640] restartCluster took 22.551717931s
	I1205 20:14:18.553742   30150 kubeadm.go:406] StartCluster complete in 22.602681973s
	I1205 20:14:18.553756   30150 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:18.553820   30150 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:14:18.554553   30150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:14:18.554802   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:14:18.554928   30150 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:14:18.557998   30150 out.go:177] * Enabled addons: 
	I1205 20:14:18.555108   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:14:18.559475   30150 addons.go:502] enable addons completed in 4.545174ms: enabled=[]
	I1205 20:14:18.555124   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:14:18.559870   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:14:18.560250   30150 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:14:18.560270   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.560280   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.560289   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.563480   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:18.563500   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.563511   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.563519   30150 round_trippers.go:580]     Audit-Id: 3090b6fb-aaf5-42d4-8907-6a015d6fd139
	I1205 20:14:18.563531   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.563543   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.563553   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.563564   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.563584   30150 round_trippers.go:580]     Content-Length: 291
	I1205 20:14:18.563651   30150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"785","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:14:18.563879   30150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-558947" context rescaled to 1 replicas
	I1205 20:14:18.563915   30150 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:14:18.565653   30150 out.go:177] * Verifying Kubernetes components...
	I1205 20:14:18.567049   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:14:18.736797   30150 command_runner.go:130] > apiVersion: v1
	I1205 20:14:18.736822   30150 command_runner.go:130] > data:
	I1205 20:14:18.736829   30150 command_runner.go:130] >   Corefile: |
	I1205 20:14:18.736835   30150 command_runner.go:130] >     .:53 {
	I1205 20:14:18.736841   30150 command_runner.go:130] >         log
	I1205 20:14:18.736849   30150 command_runner.go:130] >         errors
	I1205 20:14:18.736856   30150 command_runner.go:130] >         health {
	I1205 20:14:18.736868   30150 command_runner.go:130] >            lameduck 5s
	I1205 20:14:18.736882   30150 command_runner.go:130] >         }
	I1205 20:14:18.736890   30150 command_runner.go:130] >         ready
	I1205 20:14:18.736902   30150 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1205 20:14:18.736911   30150 command_runner.go:130] >            pods insecure
	I1205 20:14:18.736919   30150 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1205 20:14:18.736929   30150 command_runner.go:130] >            ttl 30
	I1205 20:14:18.736945   30150 command_runner.go:130] >         }
	I1205 20:14:18.736955   30150 command_runner.go:130] >         prometheus :9153
	I1205 20:14:18.736965   30150 command_runner.go:130] >         hosts {
	I1205 20:14:18.736973   30150 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1205 20:14:18.736982   30150 command_runner.go:130] >            fallthrough
	I1205 20:14:18.737013   30150 command_runner.go:130] >         }
	I1205 20:14:18.737030   30150 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1205 20:14:18.737037   30150 command_runner.go:130] >            max_concurrent 1000
	I1205 20:14:18.737045   30150 command_runner.go:130] >         }
	I1205 20:14:18.737051   30150 command_runner.go:130] >         cache 30
	I1205 20:14:18.737064   30150 command_runner.go:130] >         loop
	I1205 20:14:18.737073   30150 command_runner.go:130] >         reload
	I1205 20:14:18.737079   30150 command_runner.go:130] >         loadbalance
	I1205 20:14:18.737090   30150 command_runner.go:130] >     }
	I1205 20:14:18.737099   30150 command_runner.go:130] > kind: ConfigMap
	I1205 20:14:18.737106   30150 command_runner.go:130] > metadata:
	I1205 20:14:18.737116   30150 command_runner.go:130] >   creationTimestamp: "2023-12-05T20:03:55Z"
	I1205 20:14:18.737125   30150 command_runner.go:130] >   name: coredns
	I1205 20:14:18.737139   30150 command_runner.go:130] >   namespace: kube-system
	I1205 20:14:18.737149   30150 command_runner.go:130] >   resourceVersion: "398"
	I1205 20:14:18.737165   30150 command_runner.go:130] >   uid: 91b078ea-72c0-4b91-95c4-879eb6cb01d7
	I1205 20:14:18.741917   30150 node_ready.go:35] waiting up to 6m0s for node "multinode-558947" to be "Ready" ...
	I1205 20:14:18.742047   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:18.742063   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.742074   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.742081   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.742214   30150 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:14:18.753276   30150 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1205 20:14:18.753305   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.753312   30150 round_trippers.go:580]     Audit-Id: 6690e976-f068-40ed-a527-47314187fe6e
	I1205 20:14:18.753318   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.753323   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.753328   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.753333   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.753341   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.754131   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:18.912828   30150 request.go:629] Waited for 158.301744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:18.912920   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:18.912938   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:18.912949   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:18.912965   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:18.916010   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:18.916030   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:18.916039   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:18.916047   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:18 GMT
	I1205 20:14:18.916054   30150 round_trippers.go:580]     Audit-Id: 649f9b2d-65c6-41a7-bef0-72f10fccab8f
	I1205 20:14:18.916062   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:18.916069   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:18.916086   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:18.916424   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:19.417648   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:19.417680   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:19.417693   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:19.417703   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:19.420205   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:19.420223   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:19.420230   30150 round_trippers.go:580]     Audit-Id: 4771c623-db2b-4eb0-9b35-45b40cbc949c
	I1205 20:14:19.420236   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:19.420241   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:19.420246   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:19.420251   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:19.420256   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:19 GMT
	I1205 20:14:19.420762   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:19.917924   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:19.917947   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:19.917956   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:19.917962   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:19.920571   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:19.920591   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:19.920598   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:19 GMT
	I1205 20:14:19.920603   30150 round_trippers.go:580]     Audit-Id: 653c90a2-6a6f-4f0e-961c-f51ef25b1084
	I1205 20:14:19.920608   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:19.920613   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:19.920622   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:19.920627   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:19.920877   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:20.417369   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:20.417392   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:20.417401   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:20.417407   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:20.421380   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:20.421402   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:20.421409   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:20.421414   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:20.421420   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:20.421425   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:20 GMT
	I1205 20:14:20.421432   30150 round_trippers.go:580]     Audit-Id: 5f7bc34e-fa86-4692-bdc6-5bf1e72ffcb7
	I1205 20:14:20.421437   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:20.421766   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:20.917373   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:20.917398   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:20.917417   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:20.917424   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:20.920202   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:20.920223   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:20.920237   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:20.920245   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:20.920254   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:20.920270   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:20 GMT
	I1205 20:14:20.920294   30150 round_trippers.go:580]     Audit-Id: e4c97ebe-05a6-47ba-8a1c-33bde1aad8d5
	I1205 20:14:20.920304   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:20.920571   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:20.920942   30150 node_ready.go:58] node "multinode-558947" has status "Ready":"False"
	I1205 20:14:21.417202   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:21.417232   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:21.417245   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:21.417255   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:21.420023   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:21.420051   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:21.420060   30150 round_trippers.go:580]     Audit-Id: 9c5dc318-2d70-4a71-9427-67dab764fb71
	I1205 20:14:21.420071   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:21.420078   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:21.420085   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:21.420094   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:21.420102   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:21 GMT
	I1205 20:14:21.420221   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:21.917303   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:21.917335   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:21.917347   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:21.917355   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:21.920358   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:21.920388   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:21.920399   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:21 GMT
	I1205 20:14:21.920408   30150 round_trippers.go:580]     Audit-Id: 5f5a1557-744e-4255-9633-a871bfdc9496
	I1205 20:14:21.920417   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:21.920425   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:21.920441   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:21.920449   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:21.920773   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:22.417135   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:22.417163   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:22.417174   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:22.417182   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:22.419836   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:22.419854   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:22.419861   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:22.419866   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:22.419871   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:22.419877   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:22.419883   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:22 GMT
	I1205 20:14:22.419890   30150 round_trippers.go:580]     Audit-Id: 0031d3f9-aac1-4f86-9291-842aeb1c2c45
	I1205 20:14:22.420134   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:22.917860   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:22.917898   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:22.917913   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:22.917924   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:22.920869   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:22.920897   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:22.920908   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:22.920916   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:22.920927   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:22.920935   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:22.920945   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:22 GMT
	I1205 20:14:22.920955   30150 round_trippers.go:580]     Audit-Id: 54dee501-d64c-44ee-b3a2-72d2b93a997a
	I1205 20:14:22.921591   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:22.921921   30150 node_ready.go:58] node "multinode-558947" has status "Ready":"False"
	I1205 20:14:23.417241   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:23.417264   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:23.417271   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:23.417277   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:23.419945   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:23.419970   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:23.419981   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:23.419990   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:23 GMT
	I1205 20:14:23.420001   30150 round_trippers.go:580]     Audit-Id: f47be65d-2e21-4d00-af5e-46c63071c30d
	I1205 20:14:23.420011   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:23.420022   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:23.420033   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:23.420416   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:23.917048   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:23.917072   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:23.917080   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:23.917087   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:23.920512   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:23.920535   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:23.920542   30150 round_trippers.go:580]     Audit-Id: 1cc5e29f-efb2-443a-810c-4cde6204f0ff
	I1205 20:14:23.920547   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:23.920552   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:23.920557   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:23.920562   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:23.920567   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:23 GMT
	I1205 20:14:23.921026   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"753","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6116 chars]
	I1205 20:14:24.417787   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:24.417816   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.417828   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.417837   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.420866   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:24.420890   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.420897   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.420902   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.420907   30150 round_trippers.go:580]     Audit-Id: 31cf4889-4737-47dd-9b0e-26fb333c11b2
	I1205 20:14:24.420913   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.420920   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.420925   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.421498   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:24.421814   30150 node_ready.go:49] node "multinode-558947" has status "Ready":"True"
	I1205 20:14:24.421831   30150 node_ready.go:38] duration metric: took 5.679887591s waiting for node "multinode-558947" to be "Ready" ...
	I1205 20:14:24.421839   30150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:14:24.421905   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:24.421918   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.421927   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.421933   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.425268   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:24.425288   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.425298   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.425307   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.425315   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.425323   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.425330   30150 round_trippers.go:580]     Audit-Id: 8cfefaaa-d3f2-4ee3-b2e4-58d42edfe935
	I1205 20:14:24.425340   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.426704   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"880"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82633 chars]
	I1205 20:14:24.429149   30150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:24.429206   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:24.429213   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.429220   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.429227   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.431360   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:24.431380   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.431389   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.431397   30150 round_trippers.go:580]     Audit-Id: a9a2bdd1-c263-40f0-ab76-e69d6ede011e
	I1205 20:14:24.431413   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.431424   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.431438   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.431446   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.431572   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:24.431955   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:24.431967   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.431974   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.431982   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.433753   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:24.433770   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.433780   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.433788   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.433795   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.433807   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.433815   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.433824   30150 round_trippers.go:580]     Audit-Id: 95b3e33c-e3e5-40ad-b4a7-4e26739c07f9
	I1205 20:14:24.434162   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:24.434587   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:24.434601   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.434608   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.434614   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.437155   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:24.437183   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.437193   30150 round_trippers.go:580]     Audit-Id: cd479601-5cee-4b3f-9a53-34af1a94ccfb
	I1205 20:14:24.437201   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.437209   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.437218   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.437232   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.437248   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.437448   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:24.437879   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:24.437892   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.437901   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.437912   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.439653   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:24.439671   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.439680   30150 round_trippers.go:580]     Audit-Id: f93bcb34-8eb8-4e1a-82e8-600e87daa3e0
	I1205 20:14:24.439688   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.439699   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.439710   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.439717   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.439727   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.439995   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:24.941163   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:24.941190   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.941198   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.941204   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.944142   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:24.944164   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.944174   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.944182   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.944189   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.944197   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.944206   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.944215   30150 round_trippers.go:580]     Audit-Id: 55f494d6-0a33-45a6-b73e-a608c083f774
	I1205 20:14:24.944456   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:24.944923   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:24.944942   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:24.944949   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:24.944955   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:24.947120   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:24.947139   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:24.947151   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:24.947163   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:24.947170   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:24.947177   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:24 GMT
	I1205 20:14:24.947185   30150 round_trippers.go:580]     Audit-Id: 2c8c4363-63d7-44c9-9694-70b5ee4bfc4b
	I1205 20:14:24.947193   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:24.947790   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:25.441083   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:25.441106   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:25.441114   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:25.441120   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:25.444002   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:25.444022   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:25.444028   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:25.444034   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:25 GMT
	I1205 20:14:25.444039   30150 round_trippers.go:580]     Audit-Id: 833fc376-6b03-458f-b922-90c762ba09d6
	I1205 20:14:25.444044   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:25.444049   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:25.444054   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:25.444283   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:25.444730   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:25.444743   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:25.444750   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:25.444756   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:25.446798   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:25.446811   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:25.446817   30150 round_trippers.go:580]     Audit-Id: af97efce-c460-4d52-992f-e4d384d4d69f
	I1205 20:14:25.446822   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:25.446827   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:25.446832   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:25.446840   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:25.446845   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:25 GMT
	I1205 20:14:25.447068   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:25.940713   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:25.940740   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:25.940748   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:25.940754   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:25.943577   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:25.943597   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:25.943604   30150 round_trippers.go:580]     Audit-Id: 76c230b1-06d0-4ef7-b979-9cc03b53a12d
	I1205 20:14:25.943611   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:25.943618   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:25.943641   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:25.943653   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:25.943664   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:25 GMT
	I1205 20:14:25.944338   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:25.944774   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:25.944791   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:25.944797   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:25.944806   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:25.947539   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:25.947560   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:25.947568   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:25.947576   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:25.947585   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:25 GMT
	I1205 20:14:25.947593   30150 round_trippers.go:580]     Audit-Id: 98d2acbd-1456-4017-b755-cb76bd345e1d
	I1205 20:14:25.947606   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:25.947614   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:25.948066   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:26.440754   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:26.440777   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:26.440785   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:26.440794   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:26.443479   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:26.443509   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:26.443518   30150 round_trippers.go:580]     Audit-Id: e230fbff-f7af-4793-81a6-c084afe6aa59
	I1205 20:14:26.443526   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:26.443533   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:26.443544   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:26.443554   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:26.443561   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:26 GMT
	I1205 20:14:26.443830   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:26.444290   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:26.444305   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:26.444312   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:26.444318   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:26.446664   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:26.446684   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:26.446702   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:26.446719   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:26.446727   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:26 GMT
	I1205 20:14:26.446734   30150 round_trippers.go:580]     Audit-Id: 3e1be412-90cd-43ce-b609-9a5815519c7d
	I1205 20:14:26.446743   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:26.446751   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:26.447100   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:26.447472   30150 pod_ready.go:102] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"False"
	I1205 20:14:26.940842   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:26.940866   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:26.940875   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:26.940881   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:26.943428   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:26.943451   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:26.943461   30150 round_trippers.go:580]     Audit-Id: 9e5a2926-614b-42d5-a0ca-6bde5bbb5773
	I1205 20:14:26.943469   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:26.943477   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:26.943486   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:26.943496   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:26.943504   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:26 GMT
	I1205 20:14:26.943754   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:26.944308   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:26.944325   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:26.944333   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:26.944339   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:26.950674   30150 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:14:26.950703   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:26.950713   30150 round_trippers.go:580]     Audit-Id: b2d6183f-6b20-4bc4-8ed5-de8001953ffa
	I1205 20:14:26.950720   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:26.950727   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:26.950735   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:26.950742   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:26.950754   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:26 GMT
	I1205 20:14:26.951031   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:27.440681   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:27.440709   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:27.440717   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:27.440726   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:27.443811   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:27.443848   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:27.443856   30150 round_trippers.go:580]     Audit-Id: 75f844b9-66d4-45d1-928c-ec1846d1c440
	I1205 20:14:27.443861   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:27.443868   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:27.443875   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:27.443882   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:27.443893   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:27 GMT
	I1205 20:14:27.444592   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:27.445065   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:27.445080   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:27.445087   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:27.445098   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:27.447417   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:27.447441   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:27.447452   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:27.447459   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:27.447466   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:27.447474   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:27.447481   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:27 GMT
	I1205 20:14:27.447489   30150 round_trippers.go:580]     Audit-Id: 557bb734-d491-4db4-8264-395a655b083a
	I1205 20:14:27.447846   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:27.940464   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:27.940491   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:27.940499   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:27.940505   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:27.943621   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:27.943639   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:27.943647   30150 round_trippers.go:580]     Audit-Id: 7c9a0864-4247-4aea-a139-af18d8f3a01b
	I1205 20:14:27.943658   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:27.943673   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:27.943684   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:27.943693   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:27.943705   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:27 GMT
	I1205 20:14:27.943955   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:27.944489   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:27.944509   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:27.944516   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:27.944522   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:27.947913   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:27.947930   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:27.947937   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:27.947942   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:27 GMT
	I1205 20:14:27.947948   30150 round_trippers.go:580]     Audit-Id: 3e8b36cb-9160-4ccc-a10b-755717782c1c
	I1205 20:14:27.947952   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:27.947958   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:27.947963   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:27.948730   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:28.441420   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:28.441446   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:28.441470   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:28.441491   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:28.445246   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:28.445268   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:28.445275   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:28.445281   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:28 GMT
	I1205 20:14:28.445292   30150 round_trippers.go:580]     Audit-Id: d4ebea7b-83d8-4b5f-9abb-98fa7ad2a102
	I1205 20:14:28.445301   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:28.445309   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:28.445319   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:28.446002   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:28.446567   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:28.446600   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:28.446612   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:28.446621   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:28.449771   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:28.449798   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:28.449805   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:28.449810   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:28 GMT
	I1205 20:14:28.449815   30150 round_trippers.go:580]     Audit-Id: d12015fa-db76-43e8-9918-e0b994a51090
	I1205 20:14:28.449823   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:28.449831   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:28.449842   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:28.449986   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:28.450311   30150 pod_ready.go:102] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"False"
	I1205 20:14:28.940609   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:28.940632   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:28.940640   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:28.940646   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:28.944969   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:28.944989   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:28.944995   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:28.945001   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:28.945006   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:28 GMT
	I1205 20:14:28.945015   30150 round_trippers.go:580]     Audit-Id: 27fe4c58-cb79-4ff3-8197-0aa91a35af73
	I1205 20:14:28.945020   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:28.945025   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:28.945311   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:28.945799   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:28.945818   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:28.945827   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:28.945833   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:28.949482   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:28.949502   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:28.949511   30150 round_trippers.go:580]     Audit-Id: 4f08ad89-3734-473c-ad51-559e6a1536b0
	I1205 20:14:28.949519   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:28.949526   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:28.949534   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:28.949543   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:28.949554   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:28 GMT
	I1205 20:14:28.949776   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:29.441439   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:29.441463   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:29.441471   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:29.441477   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:29.444638   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:29.444667   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:29.444679   30150 round_trippers.go:580]     Audit-Id: 8023604a-e4df-4274-a0c4-e208c8e7c3cb
	I1205 20:14:29.444688   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:29.444694   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:29.444700   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:29.444707   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:29.444713   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:29 GMT
	I1205 20:14:29.444868   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:29.445331   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:29.445350   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:29.445358   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:29.445364   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:29.447804   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:29.447821   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:29.447828   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:29.447835   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:29.447840   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:29.447845   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:29 GMT
	I1205 20:14:29.447850   30150 round_trippers.go:580]     Audit-Id: 626316d6-aee6-44a9-a8da-a9509875888c
	I1205 20:14:29.447855   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:29.448001   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:29.941106   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:29.941132   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:29.941140   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:29.941146   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:29.944228   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:29.944253   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:29.944263   30150 round_trippers.go:580]     Audit-Id: 91fd3faa-1509-4c03-8dce-04866ec80681
	I1205 20:14:29.944283   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:29.944293   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:29.944302   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:29.944314   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:29.944333   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:29 GMT
	I1205 20:14:29.945001   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:29.945477   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:29.945491   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:29.945499   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:29.945505   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:29.948052   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:29.948077   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:29.948085   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:29.948094   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:29.948101   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:29.948108   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:29.948120   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:29 GMT
	I1205 20:14:29.948128   30150 round_trippers.go:580]     Audit-Id: 71160070-7eeb-4053-aff6-ae0d0706611a
	I1205 20:14:29.948271   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:30.440907   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:30.440935   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:30.440947   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:30.440956   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:30.445951   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:30.445973   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:30.445980   30150 round_trippers.go:580]     Audit-Id: 0638d1b7-6d9e-493c-83d0-bdd0647f0da8
	I1205 20:14:30.445986   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:30.445993   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:30.446001   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:30.446009   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:30.446017   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:30 GMT
	I1205 20:14:30.446689   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:30.447123   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:30.447139   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:30.447149   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:30.447158   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:30.451119   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:30.451138   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:30.451147   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:30.451155   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:30 GMT
	I1205 20:14:30.451163   30150 round_trippers.go:580]     Audit-Id: 5b3910d9-7030-4d23-84f2-dcf5a9801f53
	I1205 20:14:30.451172   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:30.451185   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:30.451197   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:30.451761   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:30.452075   30150 pod_ready.go:102] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"False"
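The repeating GET / Response Headers / Response Body blocks above are client-go's round-tripper debug output, emitted at high log verbosity. Roughly equivalent request/response tracing can be reproduced directly with kubectl's verbosity flag; the context and pod names below simply mirror the log and the exact output formatting will differ:

    kubectl --context multinode-558947 -n kube-system get pod coredns-5dd5756b68-knl4d -v=8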
	I1205 20:14:30.941371   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:30.941392   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:30.941400   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:30.941406   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:30.944945   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:30.944965   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:30.944973   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:30 GMT
	I1205 20:14:30.944979   30150 round_trippers.go:580]     Audit-Id: 5a2f81b4-7890-43ce-8312-610808abd7c8
	I1205 20:14:30.945000   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:30.945007   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:30.945015   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:30.945022   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:30.945535   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"774","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6368 chars]
	I1205 20:14:30.946010   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:30.946029   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:30.946039   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:30.946047   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:30.948280   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:30.948296   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:30.948302   30150 round_trippers.go:580]     Audit-Id: 19c8c42f-1ea6-4b70-9b04-faaa98a737f2
	I1205 20:14:30.948307   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:30.948312   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:30.948317   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:30.948322   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:30.948338   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:30 GMT
	I1205 20:14:30.950483   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.440585   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:31.440613   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.440624   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.440632   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.447745   30150 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:14:31.447774   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.447784   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.447793   30150 round_trippers.go:580]     Audit-Id: 3a9ffcca-30dc-4151-afdc-ba5c075669bf
	I1205 20:14:31.447800   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.447808   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.447816   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.447824   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.447985   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"902","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1205 20:14:31.448533   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:31.448555   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.448566   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.448575   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.456533   30150 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:14:31.456562   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.456573   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.456579   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.456585   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.456590   30150 round_trippers.go:580]     Audit-Id: 0f044459-9abe-49e3-8748-814ba6a5cbfa
	I1205 20:14:31.456595   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.456600   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.456810   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.940838   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:14:31.940868   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.940880   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.940889   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.944106   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:31.944128   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.944139   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.944148   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.944156   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.944165   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.944173   30150 round_trippers.go:580]     Audit-Id: 6dbe500c-a601-4d1a-a1ff-d24486ec97f0
	I1205 20:14:31.944181   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.944768   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:14:31.945188   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:31.945200   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.945207   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.945213   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.947217   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.947237   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.947243   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.947248   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.947253   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.947258   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.947263   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.947268   30150 round_trippers.go:580]     Audit-Id: ce145b6a-cfe2-4d8e-ae73-ac03e1f3f13e
	I1205 20:14:31.947445   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.947713   30150 pod_ready.go:92] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:31.947726   30150 pod_ready.go:81] duration metric: took 7.518560582s waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
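The pod_ready.go loop above polls the Pod (and its Node) roughly every 500 ms until the PodReady condition turns True, then records the elapsed duration. A minimal client-go sketch of the same readiness check follows; it is illustrative only (the kubeconfig loading, namespace, pod name, and poll interval are assumptions mirroring the log), not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the Pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed: default kubeconfig; pod/namespace names taken from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Overall wait budget, comparable to the 6m0s timeout in the log.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-knl4d", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to become Ready")
    			return
    		case <-time.After(500 * time.Millisecond): // ~500 ms between polls, as seen in the timestamps above
    		}
    	}
    }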
	I1205 20:14:31.947734   30150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.947772   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:14:31.947780   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.947788   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.947794   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.949673   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.949693   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.949699   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.949704   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.949711   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.949720   30150 round_trippers.go:580]     Audit-Id: d607394c-8a6a-4065-9536-8d7d8febf653
	I1205 20:14:31.949729   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.949742   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.949868   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"895","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:14:31.950227   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:31.950241   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.950247   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.950253   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.954246   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:31.954264   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.954289   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.954299   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.954308   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.954314   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.954319   30150 round_trippers.go:580]     Audit-Id: 7512fa58-b7b0-42c8-a9fb-52768ea530fc
	I1205 20:14:31.954324   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.954493   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.954794   30150 pod_ready.go:92] pod "etcd-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:31.954812   30150 pod_ready.go:81] duration metric: took 7.072594ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.954832   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.954890   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:14:31.954901   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.954911   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.954923   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.957015   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:31.957027   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.957036   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.957041   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.957049   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.957057   30150 round_trippers.go:580]     Audit-Id: eb20ac9a-6689-42fc-a856-400adafb6e57
	I1205 20:14:31.957069   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.957080   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.957245   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"871","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7371 chars]
	I1205 20:14:31.957586   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:31.957599   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.957606   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.957612   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.959267   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.959280   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.959286   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.959291   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.959296   30150 round_trippers.go:580]     Audit-Id: 244565b5-952c-4a0f-9695-e32f15be1042
	I1205 20:14:31.959301   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.959308   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.959316   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.959540   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.959805   30150 pod_ready.go:92] pod "kube-apiserver-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:31.959818   30150 pod_ready.go:81] duration metric: took 4.975895ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.959826   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.959866   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:14:31.959873   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.959879   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.959886   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.961812   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.961828   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.961834   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.961840   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.961845   30150 round_trippers.go:580]     Audit-Id: c9de0bc0-e18b-4763-b371-ac239967eeed
	I1205 20:14:31.961850   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.961857   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.961862   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.962136   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"883","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6946 chars]
	I1205 20:14:31.962626   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:31.962650   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.962661   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.962671   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.964596   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.964611   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.964617   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.964622   30150 round_trippers.go:580]     Audit-Id: 35c1c96b-a0ed-45bb-85d2-e431b46273ca
	I1205 20:14:31.964628   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.964634   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.964639   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.964644   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.964875   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:31.965168   30150 pod_ready.go:92] pod "kube-controller-manager-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:31.965181   30150 pod_ready.go:81] duration metric: took 5.350378ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.965190   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.965250   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:14:31.965261   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.965271   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.965280   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.967050   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.967067   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.967077   30150 round_trippers.go:580]     Audit-Id: 2a3b1b23-f4f3-484d-9350-cd08f44d9e02
	I1205 20:14:31.967086   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.967094   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.967101   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.967106   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.967112   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.967250   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"517","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1205 20:14:31.967609   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:14:31.967622   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:31.967630   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:31.967636   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:31.969564   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:31.969581   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:31.969589   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:31.969597   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:31 GMT
	I1205 20:14:31.969605   30150 round_trippers.go:580]     Audit-Id: 44601637-169d-4081-8746-bc142b757825
	I1205 20:14:31.969613   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:31.969620   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:31.969627   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:31.969822   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa","resourceVersion":"751","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_06_21_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I1205 20:14:31.970122   30150 pod_ready.go:92] pod "kube-proxy-kjph8" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:31.970138   30150 pod_ready.go:81] duration metric: took 4.941431ms waiting for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:31.970147   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:32.141515   30150 request.go:629] Waited for 171.317277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:14:32.141569   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:14:32.141574   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:32.141594   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:32.141605   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:32.144916   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:32.144936   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:32.144944   30150 round_trippers.go:580]     Audit-Id: ca29af4f-de60-4909-a24c-22166d4f5bef
	I1205 20:14:32.144953   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:32.144961   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:32.144970   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:32.144978   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:32.144990   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:32 GMT
	I1205 20:14:32.145156   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"783","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:14:32.340850   30150 request.go:629] Waited for 195.3205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:32.340942   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:32.340951   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:32.340959   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:32.340966   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:32.345175   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:32.345200   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:32.345209   30150 round_trippers.go:580]     Audit-Id: 5d14b52d-4f59-4e2e-8c21-e783dae0f519
	I1205 20:14:32.345217   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:32.345225   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:32.345236   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:32.345244   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:32.345251   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:32 GMT
	I1205 20:14:32.345996   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:32.346406   30150 pod_ready.go:92] pod "kube-proxy-mgmt2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:32.346427   30150 pod_ready.go:81] duration metric: took 376.272096ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:32.346439   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:32.541838   30150 request.go:629] Waited for 195.344386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:14:32.541903   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:14:32.541908   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:32.541916   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:32.541928   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:32.545158   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:32.545182   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:32.545192   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:32.545200   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:32 GMT
	I1205 20:14:32.545207   30150 round_trippers.go:580]     Audit-Id: 19fb1a7f-1095-4e08-af85-747a7a25d258
	I1205 20:14:32.545215   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:32.545223   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:32.545231   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:32.545489   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xvjj7","generateName":"kube-proxy-","namespace":"kube-system","uid":"19641919-0011-4726-b884-cc468d0f2dd0","resourceVersion":"724","creationTimestamp":"2023-12-05T20:05:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1205 20:14:32.741343   30150 request.go:629] Waited for 195.432857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:14:32.741397   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:14:32.741402   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:32.741409   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:32.741415   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:32.745071   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:32.745095   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:32.745103   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:32.745111   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:32.745120   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:32.745131   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:32.745141   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:32 GMT
	I1205 20:14:32.745151   30150 round_trippers.go:580]     Audit-Id: 725160c9-8a50-4ace-87f9-238e04f60fa2
	I1205 20:14:32.745310   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m03","uid":"b3bc91db-0091-4e00-86c1-c071017fca0a","resourceVersion":"744","creationTimestamp":"2023-12-05T20:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_06_21_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I1205 20:14:32.745612   30150 pod_ready.go:92] pod "kube-proxy-xvjj7" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:32.745628   30150 pod_ready.go:81] duration metric: took 399.183343ms waiting for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:32.745637   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:32.940996   30150 request.go:629] Waited for 195.307602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:14:32.941074   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:14:32.941081   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:32.941091   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:32.941104   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:32.944781   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:32.944807   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:32.944817   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:32.944825   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:32 GMT
	I1205 20:14:32.944833   30150 round_trippers.go:580]     Audit-Id: 37298428-48cc-4480-b9ce-849f08370bb8
	I1205 20:14:32.944850   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:32.944858   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:32.944869   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:32.945468   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"897","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:14:33.141278   30150 request.go:629] Waited for 195.377958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:33.141356   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:14:33.141375   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.141433   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.141450   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.144406   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:14:33.144437   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.144446   30150 round_trippers.go:580]     Audit-Id: 516918c0-1bfd-4cf8-9ad9-74bd826fb91f
	I1205 20:14:33.144453   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.144460   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.144466   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.144473   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.144480   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.144843   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 5940 chars]
	I1205 20:14:33.145159   30150 pod_ready.go:92] pod "kube-scheduler-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:14:33.145190   30150 pod_ready.go:81] duration metric: took 399.546104ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:14:33.145204   30150 pod_ready.go:38] duration metric: took 8.723356128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
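The pod_ready lines above come from repeatedly GETting each control-plane pod and testing its Ready condition. A minimal sketch of that pattern, assuming client-go; the kubeconfig path, poll interval, and function names are illustrative, not minikube's actual code:

    // waitForPodReady polls a named kube-system pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // illustrative poll interval
        }
        return fmt.Errorf("pod %s not Ready within %v", name, timeout)
    }

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        cs, _ := kubernetes.NewForConfig(cfg)
        if err := waitForPodReady(cs, "kube-scheduler-multinode-558947", 6*time.Minute); err != nil {
            panic(err)
        }
    }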
	I1205 20:14:33.145220   30150 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:14:33.145267   30150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:14:33.163985   30150 command_runner.go:130] > 1129
	I1205 20:14:33.164037   30150 api_server.go:72] duration metric: took 14.600096065s to wait for apiserver process to appear ...
	I1205 20:14:33.164051   30150 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:14:33.164069   30150 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:14:33.169083   30150 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1205 20:14:33.169147   30150 round_trippers.go:463] GET https://192.168.39.3:8443/version
	I1205 20:14:33.169154   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.169162   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.169170   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.170304   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:14:33.170327   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.170337   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.170345   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.170367   30150 round_trippers.go:580]     Content-Length: 264
	I1205 20:14:33.170381   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.170389   30150 round_trippers.go:580]     Audit-Id: 1dad2cdb-b981-42bb-b9cc-c8dbd6699601
	I1205 20:14:33.170396   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.170404   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.170486   30150 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1205 20:14:33.170530   30150 api_server.go:141] control plane version: v1.28.4
	I1205 20:14:33.170543   30150 api_server.go:131] duration metric: took 6.486836ms to wait for apiserver health ...
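The two checks just logged are a raw GET of /healthz (the body is the literal string "ok" when the apiserver is healthy) followed by a /version read whose gitVersion becomes the "control plane version" line. A small sketch of both, assuming client-go; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz through the discovery REST client; healthy apiservers answer "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version; GitVersion corresponds to "control plane version: v1.28.4" above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }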
	I1205 20:14:33.170550   30150 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:14:33.340912   30150 request.go:629] Waited for 170.298053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:33.340976   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:33.340982   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.341010   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.341024   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.345826   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:33.345856   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.345866   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.345874   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.345895   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.345904   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.345912   30150 round_trippers.go:580]     Audit-Id: 8c2294b7-cbd1-4eb5-b84e-5e107c302e4b
	I1205 20:14:33.345919   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.346994   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81798 chars]
	I1205 20:14:33.349600   30150 system_pods.go:59] 12 kube-system pods found
	I1205 20:14:33.349625   30150 system_pods.go:61] "coredns-5dd5756b68-knl4d" [28d6c367-593c-469a-90c6-b3c13cedc3df] Running
	I1205 20:14:33.349630   30150 system_pods.go:61] "etcd-multinode-558947" [118e2032-1898-42c0-9aa2-3f15356e9ff3] Running
	I1205 20:14:33.349636   30150 system_pods.go:61] "kindnet-7dnjd" [f957ff7c-baef-49a4-83cb-db708a3f1017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:33.349641   30150 system_pods.go:61] "kindnet-cv76g" [88acd23e-99f5-4c5f-a03c-1c961a511eac] Running
	I1205 20:14:33.349648   30150 system_pods.go:61] "kindnet-xcs7j" [c86c9a0d-7018-41d4-9bf2-60262f1a66e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:33.349655   30150 system_pods.go:61] "kube-apiserver-multinode-558947" [36300192-b165-4bee-b791-9fce329428f9] Running
	I1205 20:14:33.349660   30150 system_pods.go:61] "kube-controller-manager-multinode-558947" [49ee6fa8-b7cd-4880-b4db-a1717b685750] Running
	I1205 20:14:33.349664   30150 system_pods.go:61] "kube-proxy-kjph8" [05167608-ef4c-4bac-b57b-0330ab4cef76] Running
	I1205 20:14:33.349668   30150 system_pods.go:61] "kube-proxy-mgmt2" [41275cfd-cb0f-4886-b1bc-a86b7e20cc14] Running
	I1205 20:14:33.349671   30150 system_pods.go:61] "kube-proxy-xvjj7" [19641919-0011-4726-b884-cc468d0f2dd0] Running
	I1205 20:14:33.349675   30150 system_pods.go:61] "kube-scheduler-multinode-558947" [526e311f-432f-4c9a-ad6e-19855cae55be] Running
	I1205 20:14:33.349679   30150 system_pods.go:61] "storage-provisioner" [58d4c242-7ea5-49f5-999c-3c9135144038] Running
	I1205 20:14:33.349685   30150 system_pods.go:74] duration metric: took 179.126813ms to wait for pod list to return data ...
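The pod inventory above is a single List of the kube-system namespace, and the "Waited for ... due to client-side throttling" lines are client-go's local rate limiter spacing out the burst of GETs (configured via QPS/Burst on rest.Config). A sketch under those assumptions; the QPS/Burst values and kubeconfig path are illustrative, not necessarily what minikube sets:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 5    // client-side request rate; exceeding it produces the throttling waits
        cfg.Burst = 10 // illustrative burst size
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }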
	I1205 20:14:33.349691   30150 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:14:33.541130   30150 request.go:629] Waited for 191.365655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:14:33.541193   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:14:33.541198   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.541210   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.541219   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.545049   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:33.545072   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.545081   30150 round_trippers.go:580]     Audit-Id: 3791ed46-3d9b-4933-a825-f021b80085c6
	I1205 20:14:33.545089   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.545098   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.545103   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.545108   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.545114   30150 round_trippers.go:580]     Content-Length: 261
	I1205 20:14:33.545119   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.545140   30150 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a86eaa39-f2bf-4545-87d3-0c9eefaad8ac","resourceVersion":"341","creationTimestamp":"2023-12-05T20:04:08Z"}}]}
	I1205 20:14:33.545329   30150 default_sa.go:45] found service account: "default"
	I1205 20:14:33.545351   30150 default_sa.go:55] duration metric: took 195.650013ms for default service account to be created ...
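The default_sa step is a ServiceAccount list in the "default" namespace, checked for an entry named "default". A minimal sketch, assuming client-go and a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                fmt.Println("found service account:", sa.Name)
                return
            }
        }
        fmt.Println("default service account not created yet")
    }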
	I1205 20:14:33.545358   30150 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:14:33.741773   30150 request.go:629] Waited for 196.357902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:33.741841   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:14:33.741848   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.741858   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.741866   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.746815   30150 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:14:33.746834   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.746842   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.746851   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.746859   30150 round_trippers.go:580]     Audit-Id: aa7449c7-8723-4731-a52a-68719934147a
	I1205 20:14:33.746865   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.746873   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.746881   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.748409   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81798 chars]
	I1205 20:14:33.750824   30150 system_pods.go:86] 12 kube-system pods found
	I1205 20:14:33.750848   30150 system_pods.go:89] "coredns-5dd5756b68-knl4d" [28d6c367-593c-469a-90c6-b3c13cedc3df] Running
	I1205 20:14:33.750853   30150 system_pods.go:89] "etcd-multinode-558947" [118e2032-1898-42c0-9aa2-3f15356e9ff3] Running
	I1205 20:14:33.750861   30150 system_pods.go:89] "kindnet-7dnjd" [f957ff7c-baef-49a4-83cb-db708a3f1017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:33.750869   30150 system_pods.go:89] "kindnet-cv76g" [88acd23e-99f5-4c5f-a03c-1c961a511eac] Running
	I1205 20:14:33.750879   30150 system_pods.go:89] "kindnet-xcs7j" [c86c9a0d-7018-41d4-9bf2-60262f1a66e6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1205 20:14:33.750893   30150 system_pods.go:89] "kube-apiserver-multinode-558947" [36300192-b165-4bee-b791-9fce329428f9] Running
	I1205 20:14:33.750901   30150 system_pods.go:89] "kube-controller-manager-multinode-558947" [49ee6fa8-b7cd-4880-b4db-a1717b685750] Running
	I1205 20:14:33.750909   30150 system_pods.go:89] "kube-proxy-kjph8" [05167608-ef4c-4bac-b57b-0330ab4cef76] Running
	I1205 20:14:33.750922   30150 system_pods.go:89] "kube-proxy-mgmt2" [41275cfd-cb0f-4886-b1bc-a86b7e20cc14] Running
	I1205 20:14:33.750929   30150 system_pods.go:89] "kube-proxy-xvjj7" [19641919-0011-4726-b884-cc468d0f2dd0] Running
	I1205 20:14:33.750933   30150 system_pods.go:89] "kube-scheduler-multinode-558947" [526e311f-432f-4c9a-ad6e-19855cae55be] Running
	I1205 20:14:33.750938   30150 system_pods.go:89] "storage-provisioner" [58d4c242-7ea5-49f5-999c-3c9135144038] Running
	I1205 20:14:33.750943   30150 system_pods.go:126] duration metric: took 205.581508ms to wait for k8s-apps to be running ...
	I1205 20:14:33.750952   30150 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:14:33.751004   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:14:33.765142   30150 system_svc.go:56] duration metric: took 14.180468ms WaitForService to wait for kubelet.
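The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" on the VM and treats a zero exit status as "running". A sketch of that exit-code interpretation over SSH, assuming golang.org/x/crypto/ssh rather than minikube's own ssh_runner; the host, user, and key path are placeholders:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func kubeletActive(addr string, signer ssh.Signer) (bool, error) {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return false, err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return false, err
        }
        defer session.Close()
        // is-active --quiet exits 0 only when the unit is active; a non-zero exit
        // surfaces here as *ssh.ExitError, which we treat as "not running".
        err = session.Run("sudo systemctl is-active --quiet kubelet")
        if _, notActive := err.(*ssh.ExitError); notActive {
            return false, nil
        }
        return err == nil, err
    }

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/.minikube/machines/multinode-558947-m02/id_rsa") // placeholder
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        running, err := kubeletActive("192.168.39.10:22", signer)
        fmt.Println("kubelet running:", running, "err:", err)
    }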
	I1205 20:14:33.765169   30150 kubeadm.go:581] duration metric: took 15.20122995s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:14:33.765184   30150 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:14:33.941616   30150 request.go:629] Waited for 176.365118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I1205 20:14:33.941667   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:14:33.941683   30150 round_trippers.go:469] Request Headers:
	I1205 20:14:33.941691   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:14:33.941697   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:14:33.944748   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:14:33.944766   30150 round_trippers.go:577] Response Headers:
	I1205 20:14:33.944772   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:14:33.944777   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:14:33.944783   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:14:33 GMT
	I1205 20:14:33.944820   30150 round_trippers.go:580]     Audit-Id: 47d547f1-9c16-43e5-b5e6-d79065fcdf13
	I1205 20:14:33.944834   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:14:33.944842   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:14:33.945140   30150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"915"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"876","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16177 chars]
	I1205 20:14:33.945695   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:33.945711   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:33.945721   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:33.945725   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:33.945729   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:14:33.945732   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:14:33.945736   30150 node_conditions.go:105] duration metric: took 180.54767ms to run NodePressure ...
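The NodePressure verification lists every node and reads its capacity fields; the "ephemeral capacity is 17784752Ki" and "cpu capacity is 2" lines are those values printed per node. A sketch of that read, assuming client-go and a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }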
	I1205 20:14:33.945746   30150 start.go:228] waiting for startup goroutines ...
	I1205 20:14:33.945753   30150 start.go:233] waiting for cluster config update ...
	I1205 20:14:33.945759   30150 start.go:242] writing updated cluster config ...
	I1205 20:14:33.946224   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:14:33.946351   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:14:33.949382   30150 out.go:177] * Starting worker node multinode-558947-m02 in cluster multinode-558947
	I1205 20:14:33.950426   30150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:14:33.950445   30150 cache.go:56] Caching tarball of preloaded images
	I1205 20:14:33.950551   30150 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:14:33.950568   30150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:14:33.950661   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:14:33.950865   30150 start.go:365] acquiring machines lock for multinode-558947-m02: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:14:33.950911   30150 start.go:369] acquired machines lock for "multinode-558947-m02" in 25.372µs
	I1205 20:14:33.950932   30150 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:14:33.950941   30150 fix.go:54] fixHost starting: m02
	I1205 20:14:33.951234   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:14:33.951277   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:14:33.965217   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I1205 20:14:33.965609   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:14:33.965977   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:14:33.966001   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:14:33.966390   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:14:33.966564   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:14:33.966721   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetState
	I1205 20:14:33.968302   30150 fix.go:102] recreateIfNeeded on multinode-558947-m02: state=Running err=<nil>
	W1205 20:14:33.968319   30150 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:14:33.969908   30150 out.go:177] * Updating the running kvm2 "multinode-558947-m02" VM ...
	I1205 20:14:33.971096   30150 machine.go:88] provisioning docker machine ...
	I1205 20:14:33.971115   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:14:33.971305   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:14:33.971451   30150 buildroot.go:166] provisioning hostname "multinode-558947-m02"
	I1205 20:14:33.971472   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:14:33.971584   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:14:33.973864   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:33.974247   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:33.974292   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:33.974425   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:14:33.974587   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:33.974734   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:33.974858   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:14:33.974997   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:33.975313   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:14:33.975331   30150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947-m02 && echo "multinode-558947-m02" | sudo tee /etc/hostname
	I1205 20:14:34.113265   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-558947-m02
	
	I1205 20:14:34.113308   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:14:34.115884   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.116273   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:34.116329   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.116476   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:14:34.116667   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:34.116835   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:34.116975   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:14:34.117146   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:34.117488   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:14:34.117513   30150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-558947-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-558947-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-558947-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:14:34.235178   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:14:34.235211   30150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:14:34.235229   30150 buildroot.go:174] setting up certificates
	I1205 20:14:34.235242   30150 provision.go:83] configureAuth start
	I1205 20:14:34.235254   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetMachineName
	I1205 20:14:34.235519   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:14:34.238252   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.238665   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:34.238684   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.238860   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:14:34.241229   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.241542   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:34.241576   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.241682   30150 provision.go:138] copyHostCerts
	I1205 20:14:34.241709   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:14:34.241737   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:14:34.241747   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:14:34.241807   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:14:34.241878   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:14:34.241895   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:14:34.241899   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:14:34.241921   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:14:34.241963   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:14:34.241979   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:14:34.241985   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:14:34.242003   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:14:34.242044   30150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.multinode-558947-m02 san=[192.168.39.10 192.168.39.10 localhost 127.0.0.1 minikube multinode-558947-m02]
	I1205 20:14:34.450031   30150 provision.go:172] copyRemoteCerts
	I1205 20:14:34.450081   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:14:34.450106   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:14:34.452781   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.453137   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:34.453164   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.453324   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:14:34.453491   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:34.453677   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:14:34.453826   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:14:34.544977   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:14:34.545044   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:14:34.569582   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:14:34.569661   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 20:14:34.592046   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:14:34.592125   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:14:34.614453   30150 provision.go:86] duration metric: configureAuth took 379.19865ms
	I1205 20:14:34.614484   30150 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:14:34.614693   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:14:34.614759   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:14:34.617443   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.617842   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:14:34.617867   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:14:34.618058   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:14:34.618311   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:34.618481   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:14:34.618625   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:14:34.618800   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:14:34.619095   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:14:34.619110   30150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:16:05.240359   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:16:05.240393   30150 machine.go:91] provisioned docker machine in 1m31.269285904s
	I1205 20:16:05.240410   30150 start.go:300] post-start starting for "multinode-558947-m02" (driver="kvm2")
	I1205 20:16:05.240421   30150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:16:05.240438   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:16:05.240793   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:16:05.240827   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:16:05.243784   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.244212   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:05.244252   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.244393   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:16:05.244593   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:16:05.244792   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:16:05.244946   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:16:05.335949   30150 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:16:05.340485   30150 command_runner.go:130] > NAME=Buildroot
	I1205 20:16:05.340508   30150 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1205 20:16:05.340512   30150 command_runner.go:130] > ID=buildroot
	I1205 20:16:05.340517   30150 command_runner.go:130] > VERSION_ID=2021.02.12
	I1205 20:16:05.340522   30150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1205 20:16:05.340634   30150 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:16:05.340658   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:16:05.340748   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:16:05.340845   30150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:16:05.340857   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 20:16:05.340957   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:16:05.349249   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:16:05.373666   30150 start.go:303] post-start completed in 133.24263ms
	I1205 20:16:05.373690   30150 fix.go:56] fixHost completed within 1m31.422750526s
	I1205 20:16:05.373710   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:16:05.376964   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.377358   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:05.377386   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.377603   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:16:05.377835   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:16:05.377996   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:16:05.378144   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:16:05.378367   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:16:05.378798   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1205 20:16:05.378814   30150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:16:05.503247   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701807365.495209304
	
	I1205 20:16:05.503280   30150 fix.go:206] guest clock: 1701807365.495209304
	I1205 20:16:05.503287   30150 fix.go:219] Guest: 2023-12-05 20:16:05.495209304 +0000 UTC Remote: 2023-12-05 20:16:05.373694156 +0000 UTC m=+453.836252956 (delta=121.515148ms)
	I1205 20:16:05.503303   30150 fix.go:190] guest clock delta is within tolerance: 121.515148ms
	I1205 20:16:05.503308   30150 start.go:83] releasing machines lock for "multinode-558947-m02", held for 1m31.552385499s
	I1205 20:16:05.503328   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:16:05.503614   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:16:05.506510   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.506862   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:05.506891   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.509109   30150 out.go:177] * Found network options:
	I1205 20:16:05.510847   30150 out.go:177]   - NO_PROXY=192.168.39.3
	W1205 20:16:05.512371   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:16:05.512413   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:16:05.513014   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:16:05.513219   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:16:05.513315   30150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:16:05.513353   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	W1205 20:16:05.513449   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:16:05.513535   30150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:16:05.513558   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:16:05.516149   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.516554   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:05.516590   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.516616   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.516804   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:16:05.517014   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:16:05.517128   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:05.517154   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:05.517159   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:16:05.517311   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:16:05.517322   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:16:05.517512   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:16:05.517682   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:16:05.517822   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:16:05.630582   30150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:16:05.757119   30150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:16:05.763357   30150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:16:05.763865   30150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:16:05.763921   30150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:16:05.772215   30150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:16:05.772231   30150 start.go:475] detecting cgroup driver to use...
	I1205 20:16:05.772290   30150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:16:05.785831   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:16:05.798604   30150 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:16:05.798683   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:16:05.811677   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:16:05.824481   30150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:16:05.960737   30150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:16:06.104819   30150 docker.go:219] disabling docker service ...
	I1205 20:16:06.104886   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:16:06.120725   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:16:06.133025   30150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:16:06.263954   30150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:16:06.396557   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:16:06.411943   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:16:06.428951   30150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:16:06.429400   30150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:16:06.429454   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:16:06.438741   30150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:16:06.438824   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:16:06.449051   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:16:06.458960   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:16:06.468420   30150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:16:06.477992   30150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:16:06.486016   30150 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:16:06.486163   30150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:16:06.494226   30150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:16:06.613231   30150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:16:09.428726   30150 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.81544786s)
	I1205 20:16:09.428762   30150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:16:09.428820   30150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:16:09.435987   30150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:16:09.436013   30150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:16:09.436023   30150 command_runner.go:130] > Device: 16h/22d	Inode: 1190        Links: 1
	I1205 20:16:09.436034   30150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:16:09.436041   30150 command_runner.go:130] > Access: 2023-12-05 20:16:09.351541823 +0000
	I1205 20:16:09.436051   30150 command_runner.go:130] > Modify: 2023-12-05 20:16:09.351541823 +0000
	I1205 20:16:09.436058   30150 command_runner.go:130] > Change: 2023-12-05 20:16:09.351541823 +0000
	I1205 20:16:09.436066   30150 command_runner.go:130] >  Birth: -
	I1205 20:16:09.436543   30150 start.go:543] Will wait 60s for crictl version
	I1205 20:16:09.436604   30150 ssh_runner.go:195] Run: which crictl
	I1205 20:16:09.440714   30150 command_runner.go:130] > /usr/bin/crictl
	I1205 20:16:09.440807   30150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:16:09.481202   30150 command_runner.go:130] > Version:  0.1.0
	I1205 20:16:09.481223   30150 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:16:09.481227   30150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1205 20:16:09.481238   30150 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:16:09.481256   30150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:16:09.481329   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:16:09.535202   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:16:09.535223   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:16:09.535230   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:16:09.535239   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:16:09.535245   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:16:09.535250   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:16:09.535254   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:16:09.535258   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:16:09.535264   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:16:09.535270   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:16:09.535278   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:16:09.535282   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:16:09.535343   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:16:09.581113   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:16:09.581141   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:16:09.581154   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:16:09.581161   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:16:09.581170   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:16:09.581178   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:16:09.581184   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:16:09.581192   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:16:09.581200   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:16:09.581213   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:16:09.581225   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:16:09.581231   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:16:09.583113   30150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:16:09.584450   30150 out.go:177]   - env NO_PROXY=192.168.39.3
	I1205 20:16:09.585667   30150 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:16:09.588029   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:09.588362   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:16:09.588393   30150 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:16:09.588549   30150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:16:09.592842   30150 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 20:16:09.593122   30150 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947 for IP: 192.168.39.10
	I1205 20:16:09.593147   30150 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:16:09.593309   30150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:16:09.593365   30150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:16:09.593384   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:16:09.593411   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:16:09.593429   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:16:09.593451   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:16:09.593520   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:16:09.593564   30150 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:16:09.593590   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:16:09.593630   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:16:09.593672   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:16:09.593703   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:16:09.593770   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:16:09.593809   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 20:16:09.593832   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 20:16:09.593853   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:16:09.594434   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:16:09.617290   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:16:09.640199   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:16:09.663284   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:16:09.686502   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:16:09.708829   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:16:09.732201   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:16:09.754016   30150 ssh_runner.go:195] Run: openssl version
	I1205 20:16:09.760091   30150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1205 20:16:09.760307   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:16:09.771092   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:16:09.775529   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:16:09.775567   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:16:09.775611   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:16:09.781056   30150 command_runner.go:130] > 3ec20f2e
	I1205 20:16:09.781124   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:16:09.790335   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:16:09.801041   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:16:09.806103   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:16:09.806201   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:16:09.806263   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:16:09.812551   30150 command_runner.go:130] > b5213941
	I1205 20:16:09.812795   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:16:09.822774   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:16:09.834108   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:16:09.838836   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:16:09.839099   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:16:09.839153   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:16:09.844999   30150 command_runner.go:130] > 51391683
	I1205 20:16:09.845069   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:16:09.855402   30150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:16:09.861529   30150 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:16:09.861578   30150 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:16:09.861696   30150 ssh_runner.go:195] Run: crio config
	I1205 20:16:09.925482   30150 command_runner.go:130] ! time="2023-12-05 20:16:09.917519765Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1205 20:16:09.925508   30150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:16:09.940167   30150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:16:09.940222   30150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:16:09.940231   30150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:16:09.940235   30150 command_runner.go:130] > #
	I1205 20:16:09.940246   30150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:16:09.940255   30150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:16:09.940265   30150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:16:09.940281   30150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:16:09.940290   30150 command_runner.go:130] > # reload'.
	I1205 20:16:09.940302   30150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:16:09.940316   30150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:16:09.940331   30150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:16:09.940344   30150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:16:09.940354   30150 command_runner.go:130] > [crio]
	I1205 20:16:09.940366   30150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:16:09.940376   30150 command_runner.go:130] > # containers images, in this directory.
	I1205 20:16:09.940392   30150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:16:09.940411   30150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:16:09.940423   30150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:16:09.940437   30150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:16:09.940451   30150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:16:09.940461   30150 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:16:09.940472   30150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:16:09.940485   30150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:16:09.940495   30150 command_runner.go:130] > storage_option = [
	I1205 20:16:09.940505   30150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:16:09.940514   30150 command_runner.go:130] > ]
	I1205 20:16:09.940526   30150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:16:09.940540   30150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:16:09.940552   30150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:16:09.940562   30150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:16:09.940576   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:16:09.940587   30150 command_runner.go:130] > # always happen on a node reboot
	I1205 20:16:09.940599   30150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:16:09.940613   30150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:16:09.940627   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:16:09.940643   30150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:16:09.940655   30150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:16:09.940671   30150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:16:09.940687   30150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:16:09.940698   30150 command_runner.go:130] > # internal_wipe = true
	I1205 20:16:09.940712   30150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:16:09.940726   30150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:16:09.940740   30150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:16:09.940753   30150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:16:09.940766   30150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:16:09.940785   30150 command_runner.go:130] > [crio.api]
	I1205 20:16:09.940797   30150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:16:09.940809   30150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:16:09.940822   30150 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:16:09.940841   30150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:16:09.940856   30150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:16:09.940870   30150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:16:09.940880   30150 command_runner.go:130] > # stream_port = "0"
	I1205 20:16:09.940896   30150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:16:09.940906   30150 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:16:09.940917   30150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:16:09.940928   30150 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:16:09.940940   30150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:16:09.940955   30150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:16:09.940964   30150 command_runner.go:130] > # minutes.
	I1205 20:16:09.940974   30150 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:16:09.940988   30150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:16:09.941002   30150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:16:09.941012   30150 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:16:09.941023   30150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:16:09.941037   30150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:16:09.941050   30150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:16:09.941060   30150 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:16:09.941074   30150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:16:09.941085   30150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:16:09.941098   30150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:16:09.941109   30150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 20:16:09.941135   30150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:16:09.941148   30150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:16:09.941159   30150 command_runner.go:130] > [crio.runtime]
	I1205 20:16:09.941172   30150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:16:09.941185   30150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:16:09.941195   30150 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:16:09.941206   30150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:16:09.941217   30150 command_runner.go:130] > # default_ulimits = [
	I1205 20:16:09.941226   30150 command_runner.go:130] > # ]
	I1205 20:16:09.941237   30150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:16:09.941250   30150 command_runner.go:130] > # no_pivot = false
	I1205 20:16:09.941264   30150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:16:09.941278   30150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:16:09.941287   30150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:16:09.941298   30150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:16:09.941314   30150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:16:09.941329   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:16:09.941341   30150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:16:09.941352   30150 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:16:09.941367   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:16:09.941377   30150 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:16:09.941388   30150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:16:09.941401   30150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:16:09.941416   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:16:09.941426   30150 command_runner.go:130] > conmon_env = [
	I1205 20:16:09.941440   30150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:16:09.941449   30150 command_runner.go:130] > ]
	I1205 20:16:09.941459   30150 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:16:09.941470   30150 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:16:09.941481   30150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:16:09.941491   30150 command_runner.go:130] > # default_env = [
	I1205 20:16:09.941501   30150 command_runner.go:130] > # ]
	I1205 20:16:09.941512   30150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:16:09.941522   30150 command_runner.go:130] > # selinux = false
	I1205 20:16:09.941537   30150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:16:09.941551   30150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:16:09.941564   30150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:16:09.941572   30150 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:16:09.941586   30150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:16:09.941600   30150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:16:09.941614   30150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:16:09.941625   30150 command_runner.go:130] > # which might increase security.
	I1205 20:16:09.941637   30150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:16:09.941652   30150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:16:09.941666   30150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:16:09.941680   30150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:16:09.941695   30150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:16:09.941708   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:16:09.941719   30150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:16:09.941730   30150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:16:09.941741   30150 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:16:09.941753   30150 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:16:09.941766   30150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:16:09.941780   30150 command_runner.go:130] > # irqbalance daemon.
	I1205 20:16:09.941792   30150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:16:09.941805   30150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:16:09.941815   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:16:09.941826   30150 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:16:09.941836   30150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:16:09.941847   30150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:16:09.941858   30150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:16:09.941869   30150 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:16:09.941881   30150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:16:09.941896   30150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:16:09.941906   30150 command_runner.go:130] > # will be added.
	I1205 20:16:09.941917   30150 command_runner.go:130] > # default_capabilities = [
	I1205 20:16:09.941926   30150 command_runner.go:130] > # 	"CHOWN",
	I1205 20:16:09.941936   30150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:16:09.941944   30150 command_runner.go:130] > # 	"FSETID",
	I1205 20:16:09.941954   30150 command_runner.go:130] > # 	"FOWNER",
	I1205 20:16:09.941965   30150 command_runner.go:130] > # 	"SETGID",
	I1205 20:16:09.941975   30150 command_runner.go:130] > # 	"SETUID",
	I1205 20:16:09.941986   30150 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:16:09.941995   30150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:16:09.942002   30150 command_runner.go:130] > # 	"KILL",
	I1205 20:16:09.942010   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942022   30150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:16:09.942036   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:16:09.942047   30150 command_runner.go:130] > # default_sysctls = [
	I1205 20:16:09.942056   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942068   30150 command_runner.go:130] > # List of devices on the host that a
	I1205 20:16:09.942080   30150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:16:09.942089   30150 command_runner.go:130] > # allowed_devices = [
	I1205 20:16:09.942097   30150 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:16:09.942106   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942115   30150 command_runner.go:130] > # List of additional devices. specified as
	I1205 20:16:09.942131   30150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:16:09.942145   30150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:16:09.942218   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:16:09.942233   30150 command_runner.go:130] > # additional_devices = [
	I1205 20:16:09.942239   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942247   30150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:16:09.942255   30150 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:16:09.942265   30150 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:16:09.942295   30150 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:16:09.942302   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942316   30150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:16:09.942331   30150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:16:09.942342   30150 command_runner.go:130] > # Defaults to false.
	I1205 20:16:09.942353   30150 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:16:09.942367   30150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:16:09.942380   30150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:16:09.942388   30150 command_runner.go:130] > # hooks_dir = [
	I1205 20:16:09.942400   30150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:16:09.942409   30150 command_runner.go:130] > # ]
	I1205 20:16:09.942420   30150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:16:09.942434   30150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:16:09.942447   30150 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:16:09.942456   30150 command_runner.go:130] > #
	I1205 20:16:09.942467   30150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:16:09.942482   30150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:16:09.942495   30150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:16:09.942506   30150 command_runner.go:130] > #
	I1205 20:16:09.942520   30150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:16:09.942535   30150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:16:09.942549   30150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:16:09.942560   30150 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:16:09.942569   30150 command_runner.go:130] > #
	I1205 20:16:09.942578   30150 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:16:09.942590   30150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:16:09.942606   30150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:16:09.942616   30150 command_runner.go:130] > pids_limit = 1024
	I1205 20:16:09.942631   30150 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1205 20:16:09.942645   30150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:16:09.942659   30150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:16:09.942676   30150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:16:09.942686   30150 command_runner.go:130] > # log_size_max = -1
	I1205 20:16:09.942702   30150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1205 20:16:09.942713   30150 command_runner.go:130] > # log_to_journald = false
	I1205 20:16:09.942725   30150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:16:09.942734   30150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:16:09.942747   30150 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:16:09.942760   30150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:16:09.942777   30150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:16:09.942788   30150 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:16:09.942799   30150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:16:09.942809   30150 command_runner.go:130] > # read_only = false
	I1205 20:16:09.942821   30150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:16:09.942833   30150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:16:09.942844   30150 command_runner.go:130] > # live configuration reload.
	I1205 20:16:09.942852   30150 command_runner.go:130] > # log_level = "info"
	I1205 20:16:09.942864   30150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:16:09.942877   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:16:09.942888   30150 command_runner.go:130] > # log_filter = ""
	I1205 20:16:09.942902   30150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:16:09.942916   30150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:16:09.942926   30150 command_runner.go:130] > # separated by comma.
	I1205 20:16:09.942936   30150 command_runner.go:130] > # uid_mappings = ""
	I1205 20:16:09.942948   30150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:16:09.942964   30150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:16:09.942975   30150 command_runner.go:130] > # separated by comma.
	I1205 20:16:09.942986   30150 command_runner.go:130] > # gid_mappings = ""
	I1205 20:16:09.942998   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:16:09.943012   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:16:09.943026   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:16:09.943037   30150 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:16:09.943049   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:16:09.943062   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:16:09.943075   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:16:09.943091   30150 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:16:09.943105   30150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:16:09.943119   30150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:16:09.943133   30150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:16:09.943142   30150 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:16:09.943153   30150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:16:09.943167   30150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:16:09.943179   30150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:16:09.943191   30150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:16:09.943206   30150 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:16:09.943220   30150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:16:09.943233   30150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:16:09.943247   30150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:16:09.943258   30150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:16:09.943270   30150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:16:09.943282   30150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:16:09.943294   30150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:16:09.943309   30150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:16:09.943323   30150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:16:09.943335   30150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:16:09.943349   30150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:16:09.943363   30150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:16:09.943374   30150 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:16:09.943387   30150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:16:09.943403   30150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:16:09.943422   30150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:16:09.943434   30150 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:16:09.943452   30150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:16:09.943464   30150 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:16:09.943476   30150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:16:09.943486   30150 command_runner.go:130] > # ]
	I1205 20:16:09.943499   30150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:16:09.943513   30150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:16:09.943527   30150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:16:09.943541   30150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:16:09.943550   30150 command_runner.go:130] > #
	I1205 20:16:09.943566   30150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:16:09.943578   30150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:16:09.943586   30150 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:16:09.943598   30150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:16:09.943610   30150 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:16:09.943621   30150 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:16:09.943631   30150 command_runner.go:130] > # Where:
	I1205 20:16:09.943642   30150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:16:09.943657   30150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:16:09.943670   30150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:16:09.943685   30150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:16:09.943695   30150 command_runner.go:130] > #   in $PATH.
	I1205 20:16:09.943709   30150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:16:09.943721   30150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:16:09.943735   30150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:16:09.943745   30150 command_runner.go:130] > #   state.
	I1205 20:16:09.943760   30150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:16:09.943777   30150 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 20:16:09.943794   30150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:16:09.943808   30150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:16:09.943836   30150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:16:09.943850   30150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:16:09.943862   30150 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:16:09.943877   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:16:09.943892   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:16:09.943905   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:16:09.943919   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:16:09.943936   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:16:09.943951   30150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:16:09.943965   30150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:16:09.943980   30150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:16:09.943993   30150 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:16:09.944004   30150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:16:09.944015   30150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:16:09.944025   30150 command_runner.go:130] > runtime_type = "oci"
	I1205 20:16:09.944033   30150 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:16:09.944045   30150 command_runner.go:130] > runtime_config_path = ""
	I1205 20:16:09.944056   30150 command_runner.go:130] > monitor_path = ""
	I1205 20:16:09.944066   30150 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:16:09.944075   30150 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:16:09.944089   30150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:16:09.944100   30150 command_runner.go:130] > # running containers
	I1205 20:16:09.944111   30150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:16:09.944123   30150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:16:09.944161   30150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:16:09.944174   30150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:16:09.944187   30150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:16:09.944199   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:16:09.944212   30150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:16:09.944221   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:16:09.944233   30150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:16:09.944245   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:16:09.944259   30150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:16:09.944271   30150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:16:09.944286   30150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:16:09.944302   30150 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:16:09.944319   30150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:16:09.944332   30150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:16:09.944351   30150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:16:09.944368   30150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:16:09.944381   30150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:16:09.944396   30150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:16:09.944406   30150 command_runner.go:130] > # Example:
	I1205 20:16:09.944415   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:16:09.944427   30150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:16:09.944439   30150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:16:09.944452   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:16:09.944461   30150 command_runner.go:130] > # cpuset = 0
	I1205 20:16:09.944469   30150 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:16:09.944478   30150 command_runner.go:130] > # Where:
	I1205 20:16:09.944489   30150 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:16:09.944504   30150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:16:09.944522   30150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:16:09.944535   30150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:16:09.944552   30150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:16:09.944565   30150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:16:09.944574   30150 command_runner.go:130] > # 
	I1205 20:16:09.944586   30150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:16:09.944595   30150 command_runner.go:130] > #
	I1205 20:16:09.944606   30150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:16:09.944619   30150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:16:09.944633   30150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:16:09.944648   30150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:16:09.944662   30150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:16:09.944671   30150 command_runner.go:130] > [crio.image]
	I1205 20:16:09.944683   30150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:16:09.944695   30150 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:16:09.944706   30150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:16:09.944720   30150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:16:09.944731   30150 command_runner.go:130] > # global_auth_file = ""
	I1205 20:16:09.944743   30150 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:16:09.944756   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:16:09.944767   30150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:16:09.944787   30150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:16:09.944800   30150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:16:09.944813   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:16:09.944823   30150 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:16:09.944834   30150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:16:09.944848   30150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 20:16:09.944862   30150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 20:16:09.944875   30150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:16:09.944886   30150 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:16:09.944898   30150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:16:09.944912   30150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:16:09.944923   30150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:16:09.944937   30150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:16:09.944950   30150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:16:09.944961   30150 command_runner.go:130] > # signature_policy = ""
	I1205 20:16:09.944973   30150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:16:09.944988   30150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:16:09.944998   30150 command_runner.go:130] > # changing them here.
	I1205 20:16:09.945006   30150 command_runner.go:130] > # insecure_registries = [
	I1205 20:16:09.945015   30150 command_runner.go:130] > # ]
	I1205 20:16:09.945032   30150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:16:09.945044   30150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:16:09.945055   30150 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:16:09.945065   30150 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:16:09.945076   30150 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:16:09.945087   30150 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:16:09.945095   30150 command_runner.go:130] > # CNI plugins.
	I1205 20:16:09.945103   30150 command_runner.go:130] > [crio.network]
	I1205 20:16:09.945117   30150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:16:09.945130   30150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 20:16:09.945140   30150 command_runner.go:130] > # cni_default_network = ""
	I1205 20:16:09.945153   30150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:16:09.945165   30150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:16:09.945178   30150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:16:09.945189   30150 command_runner.go:130] > # plugin_dirs = [
	I1205 20:16:09.945197   30150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:16:09.945207   30150 command_runner.go:130] > # ]
	I1205 20:16:09.945218   30150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:16:09.945228   30150 command_runner.go:130] > [crio.metrics]
	I1205 20:16:09.945239   30150 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:16:09.945248   30150 command_runner.go:130] > enable_metrics = true
	I1205 20:16:09.945257   30150 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:16:09.945269   30150 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:16:09.945283   30150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:16:09.945297   30150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:16:09.945311   30150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:16:09.945322   30150 command_runner.go:130] > # metrics_collectors = [
	I1205 20:16:09.945330   30150 command_runner.go:130] > # 	"operations",
	I1205 20:16:09.945339   30150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:16:09.945350   30150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:16:09.945362   30150 command_runner.go:130] > # 	"operations_errors",
	I1205 20:16:09.945377   30150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:16:09.945388   30150 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:16:09.945399   30150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:16:09.945409   30150 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:16:09.945417   30150 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:16:09.945428   30150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:16:09.945436   30150 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:16:09.945447   30150 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:16:09.945457   30150 command_runner.go:130] > # 	"containers_oom",
	I1205 20:16:09.945465   30150 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:16:09.945476   30150 command_runner.go:130] > # 	"operations_total",
	I1205 20:16:09.945484   30150 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:16:09.945495   30150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:16:09.945503   30150 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:16:09.945515   30150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:16:09.945524   30150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:16:09.945535   30150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:16:09.945546   30150 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:16:09.945560   30150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:16:09.945571   30150 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:16:09.945578   30150 command_runner.go:130] > # ]
	I1205 20:16:09.945587   30150 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:16:09.945598   30150 command_runner.go:130] > # metrics_port = 9090
	I1205 20:16:09.945611   30150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:16:09.945621   30150 command_runner.go:130] > # metrics_socket = ""
	I1205 20:16:09.945632   30150 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:16:09.945646   30150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:16:09.945661   30150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:16:09.945671   30150 command_runner.go:130] > # certificate on any modification event.
	I1205 20:16:09.945679   30150 command_runner.go:130] > # metrics_cert = ""
	I1205 20:16:09.945693   30150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:16:09.945705   30150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:16:09.945716   30150 command_runner.go:130] > # metrics_key = ""
	I1205 20:16:09.945729   30150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:16:09.945739   30150 command_runner.go:130] > [crio.tracing]
	I1205 20:16:09.945753   30150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:16:09.945765   30150 command_runner.go:130] > # enable_tracing = false
	I1205 20:16:09.945781   30150 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 20:16:09.945793   30150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:16:09.945806   30150 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:16:09.945818   30150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:16:09.945831   30150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:16:09.945841   30150 command_runner.go:130] > [crio.stats]
	I1205 20:16:09.945853   30150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:16:09.945866   30150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:16:09.945878   30150 command_runner.go:130] > # stats_collection_period = 0
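	Most of the keys in the crio.conf dump above are commented-out defaults; only a few (pids_limit, drop_infra_ctr, pinns_path, the runc runtime entry, pause_image, enable_metrics) appear to be set explicitly by the minikube image. Purely as an illustrative sketch, not something this test run does: CRI-O also reads drop-in files from /etc/crio/crio.conf.d/, so a single key could be overridden without touching the main file (the file name and the pids_limit value below are made up for the example):
	  sudo tee /etc/crio/crio.conf.d/99-example.conf <<'EOF'
	  [crio.runtime]
	  pids_limit = 2048
	  EOF
	  sudo systemctl restart crio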
	I1205 20:16:09.945957   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:16:09.945967   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:16:09.945978   30150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:16:09.946004   30150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-558947 NodeName:multinode-558947-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:16:09.946145   30150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-558947-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:16:09.946250   30150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-558947-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:16:09.946331   30150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:16:09.957070   30150 command_runner.go:130] > kubeadm
	I1205 20:16:09.957091   30150 command_runner.go:130] > kubectl
	I1205 20:16:09.957095   30150 command_runner.go:130] > kubelet
	I1205 20:16:09.957112   30150 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:16:09.957161   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 20:16:09.967690   30150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1205 20:16:09.984758   30150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
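	The two scp calls above write the kubelet systemd drop-in (10-kubeadm.conf) and the kubelet.service unit onto the worker. As a verification sketch only (not part of this run), the effective unit could be inspected on the node with:
	  sudo systemctl cat kubelet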
	I1205 20:16:10.003233   30150 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1205 20:16:10.007661   30150 command_runner.go:130] > 192.168.39.3	control-plane.minikube.internal
	I1205 20:16:10.007732   30150 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:16:10.008073   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:16:10.008113   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:16:10.008150   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:16:10.023010   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I1205 20:16:10.023497   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:16:10.024102   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:16:10.024128   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:16:10.024496   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:16:10.024688   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:16:10.024829   30150 start.go:304] JoinCluster: &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false in
gress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:16:10.024949   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:16:10.024963   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:16:10.028218   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:16:10.028689   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:16:10.028716   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:16:10.028978   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:16:10.029187   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:16:10.029365   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:16:10.029499   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:16:10.233858   30150 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1wdme8.vryuq5q8s96db0op --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:16:10.237280   30150 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:16:10.237328   30150 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:16:10.237613   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:16:10.237649   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:16:10.252029   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I1205 20:16:10.252516   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:16:10.253001   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:16:10.253026   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:16:10.253358   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:16:10.253525   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:16:10.253693   30150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-558947-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1205 20:16:10.253717   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:16:10.256843   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:16:10.257325   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:16:10.257360   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:16:10.257512   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:16:10.257722   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:16:10.257884   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:16:10.258326   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:16:10.432738   30150 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1205 20:16:10.503554   30150 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-xcs7j, kube-system/kube-proxy-kjph8
	I1205 20:16:13.531060   30150 command_runner.go:130] > node/multinode-558947-m02 cordoned
	I1205 20:16:13.531086   30150 command_runner.go:130] > pod "busybox-5bc68d56bd-phsxm" has DeletionTimestamp older than 1 seconds, skipping
	I1205 20:16:13.531092   30150 command_runner.go:130] > node/multinode-558947-m02 drained
	I1205 20:16:13.531114   30150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-558947-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.277397422s)
	I1205 20:16:13.531128   30150 node.go:108] successfully drained node "m02"
	I1205 20:16:13.531497   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:16:13.531700   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:16:13.532003   30150 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1205 20:16:13.532052   30150 round_trippers.go:463] DELETE https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:13.532060   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:13.532067   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:13.532073   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:13.532082   30150 round_trippers.go:473]     Content-Type: application/json
	I1205 20:16:13.548186   30150 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1205 20:16:13.548215   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:13.548226   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:13.548233   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:13.548240   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:13.548250   30150 round_trippers.go:580]     Content-Length: 171
	I1205 20:16:13.548257   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:13 GMT
	I1205 20:16:13.548265   30150 round_trippers.go:580]     Audit-Id: 91ac7d4a-c255-4d12-8467-92ec5be5a343
	I1205 20:16:13.548274   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:13.548301   30150 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-558947-m02","kind":"nodes","uid":"a5bfed42-3895-4f12-8735-ab3519f101aa"}}
	I1205 20:16:13.548334   30150 node.go:124] successfully deleted node "m02"
	I1205 20:16:13.548347   30150 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:16:13.548372   30150 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:16:13.548398   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1wdme8.vryuq5q8s96db0op --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-558947-m02"
	I1205 20:16:13.602902   30150 command_runner.go:130] ! W1205 20:16:13.594954    2662 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1205 20:16:13.603308   30150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1205 20:16:13.751076   30150 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1205 20:16:13.751101   30150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1205 20:16:14.484428   30150 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:16:14.484464   30150 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1205 20:16:14.484478   30150 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1205 20:16:14.484490   30150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:16:14.484502   30150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:16:14.484509   30150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:16:14.484520   30150 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1205 20:16:14.484540   30150 command_runner.go:130] > This node has joined the cluster:
	I1205 20:16:14.484550   30150 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1205 20:16:14.484559   30150 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1205 20:16:14.484573   30150 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1205 20:16:14.484600   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:16:14.732784   30150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-558947 minikube.k8s.io/updated_at=2023_12_05T20_16_14_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:16:14.838392   30150 command_runner.go:130] > node/multinode-558947-m02 labeled
	I1205 20:16:14.847816   30150 command_runner.go:130] > node/multinode-558947-m03 labeled
	I1205 20:16:14.849653   30150 start.go:306] JoinCluster complete in 4.824820606s
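	For reference, the "m02" re-join performed above boils down to roughly the following sequence; the commands are lifted from the Run: lines in the log (the node object is actually deleted through the API here, shown below as its kubectl equivalent, and a fresh token would be minted on any re-run):
	  kubeadm token create --print-join-command --ttl=0
	  kubectl drain multinode-558947-m02 --force --grace-period=1 --ignore-daemonsets \
	    --delete-emptydir-data --disable-eviction --skip-wait-for-delete-timeout=1
	  kubectl delete node multinode-558947-m02
	  kubeadm join control-plane.minikube.internal:8443 --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all \
	    --cri-socket /var/run/crio/crio.sock --node-name=multinode-558947-m02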
	I1205 20:16:14.849679   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:16:14.849686   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:16:14.849756   30150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:16:14.854801   30150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:16:14.854827   30150 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1205 20:16:14.854837   30150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1205 20:16:14.854847   30150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:16:14.854864   30150 command_runner.go:130] > Access: 2023-12-05 20:13:42.646892622 +0000
	I1205 20:16:14.854876   30150 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1205 20:16:14.854887   30150 command_runner.go:130] > Change: 2023-12-05 20:13:40.685892622 +0000
	I1205 20:16:14.854897   30150 command_runner.go:130] >  Birth: -
	I1205 20:16:14.855164   30150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:16:14.855186   30150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:16:14.875302   30150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:16:15.252466   30150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:16:15.252499   30150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:16:15.252509   30150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:16:15.252518   30150 command_runner.go:130] > daemonset.apps/kindnet configured
	I1205 20:16:15.252900   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:16:15.253137   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:16:15.253444   30150 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:16:15.253457   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.253464   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.253472   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.256232   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.256254   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.256263   30150 round_trippers.go:580]     Audit-Id: 65aed493-e269-4d17-a79e-d55fba7a5ae8
	I1205 20:16:15.256272   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.256281   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.256288   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.256297   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.256304   30150 round_trippers.go:580]     Content-Length: 291
	I1205 20:16:15.256313   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.256329   30150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"909","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:16:15.256407   30150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-558947" context rescaled to 1 replicas
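	The scale call against the coredns Deployment above is roughly the kubectl equivalent of:
	  kubectl -n kube-system scale deployment coredns --replicas=1
	i.e. minikube keeps coredns at a single replica after the worker re-joins the cluster.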
	I1205 20:16:15.256433   30150 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1205 20:16:15.258071   30150 out.go:177] * Verifying Kubernetes components...
	I1205 20:16:15.259451   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:16:15.274803   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:16:15.275071   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:16:15.275285   30150 node_ready.go:35] waiting up to 6m0s for node "multinode-558947-m02" to be "Ready" ...
	I1205 20:16:15.275341   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:15.275361   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.275368   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.275378   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.277871   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.277888   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.277895   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.277900   30150 round_trippers.go:580]     Audit-Id: f425855c-e29b-461f-834d-8be59ad76743
	I1205 20:16:15.277906   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.277910   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.277915   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.277943   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.278131   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"0402e7e4-4e77-4d49-9b99-eee89333fa24","resourceVersion":"1060","creationTimestamp":"2023-12-05T20:16:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_16_14_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1205 20:16:15.278399   30150 node_ready.go:49] node "multinode-558947-m02" has status "Ready":"True"
	I1205 20:16:15.278413   30150 node_ready.go:38] duration metric: took 3.113769ms waiting for node "multinode-558947-m02" to be "Ready" ...
	I1205 20:16:15.278420   30150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:16:15.278471   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:16:15.278480   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.278486   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.278494   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.282127   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:15.282147   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.282156   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.282165   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.282173   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.282186   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.282208   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.282217   30150 round_trippers.go:580]     Audit-Id: cb5afc6f-513b-43fa-8a23-ad0e23e91a52
	I1205 20:16:15.284651   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1068"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82158 chars]
	I1205 20:16:15.287023   30150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.287085   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:16:15.287091   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.287098   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.287107   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.289554   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.289574   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.289583   30150 round_trippers.go:580]     Audit-Id: ceac1f56-d0b5-4dbd-ba1c-4af8b0b6658e
	I1205 20:16:15.289590   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.289598   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.289605   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.289613   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.289620   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.289733   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:16:15.290175   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:15.290191   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.290198   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.290203   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.292138   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:16:15.292151   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.292157   30150 round_trippers.go:580]     Audit-Id: 9e1dbbdb-767d-4808-9847-b082cbfc492b
	I1205 20:16:15.292162   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.292167   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.292172   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.292177   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.292184   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.292528   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:15.292895   30150 pod_ready.go:92] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:15.292913   30150 pod_ready.go:81] duration metric: took 5.87056ms waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.292921   30150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.292962   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:16:15.292970   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.292976   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.292982   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.295008   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.295022   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.295027   30150 round_trippers.go:580]     Audit-Id: e752556a-84c5-4bc6-993f-58ab476eec28
	I1205 20:16:15.295033   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.295038   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.295043   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.295048   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.295053   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.295224   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"895","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:16:15.295607   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:15.295623   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.295630   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.295636   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.297585   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:16:15.297597   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.297602   30150 round_trippers.go:580]     Audit-Id: 7d4696d5-80cd-4f02-8d16-6f2bc331f2e5
	I1205 20:16:15.297610   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.297618   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.297633   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.297640   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.297650   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.297939   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:15.298217   30150 pod_ready.go:92] pod "etcd-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:15.298231   30150 pod_ready.go:81] duration metric: took 5.30517ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.298246   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.298310   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:16:15.298319   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.298326   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.298332   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.300440   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.300460   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.300468   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.300476   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.300484   30150 round_trippers.go:580]     Audit-Id: 32eecc6b-049f-42fd-917e-b6fe8688a30d
	I1205 20:16:15.300498   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.300506   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.300517   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.300690   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"871","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7371 chars]
	I1205 20:16:15.301028   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:15.301038   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.301045   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.301050   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.302972   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:16:15.302992   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.303001   30150 round_trippers.go:580]     Audit-Id: 64d6d46b-e2d2-4da7-84fe-17abbdc9a679
	I1205 20:16:15.303009   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.303017   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.303031   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.303038   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.303047   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.303183   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:15.303546   30150 pod_ready.go:92] pod "kube-apiserver-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:15.303566   30150 pod_ready.go:81] duration metric: took 5.312414ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.303578   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.303631   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:16:15.303647   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.303658   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.303671   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.305549   30150 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1205 20:16:15.305569   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.305581   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.305590   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.305601   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.305618   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.305625   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.305636   30150 round_trippers.go:580]     Audit-Id: 2ebabf9f-b96c-4b5f-a78a-695303a32fc2
	I1205 20:16:15.305833   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"883","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6946 chars]
	I1205 20:16:15.306194   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:15.306206   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.306213   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.306218   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.309299   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:15.309314   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.309320   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.309326   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.309333   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.309341   30150 round_trippers.go:580]     Audit-Id: 76395a62-229e-407f-acf8-b512781fb259
	I1205 20:16:15.309358   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.309366   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.309491   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:15.309785   30150 pod_ready.go:92] pod "kube-controller-manager-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:15.309799   30150 pod_ready.go:81] duration metric: took 6.211365ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.309808   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:15.476074   30150 request.go:629] Waited for 166.205713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:16:15.476146   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:16:15.476158   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.476169   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.476181   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.479492   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:15.479520   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.479528   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.479536   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.479542   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.479548   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.479557   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.479571   30150 round_trippers.go:580]     Audit-Id: fbf3426c-9a3a-450f-9964-81bb4594e726
	I1205 20:16:15.479746   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"1065","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1205 20:16:15.675546   30150 request.go:629] Waited for 195.28415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:15.675622   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:15.675633   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.675644   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.675653   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.678623   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:15.678646   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.678655   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.678664   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.678672   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.678681   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.678692   30150 round_trippers.go:580]     Audit-Id: 36576de5-d7af-427f-ac11-61eada799d4d
	I1205 20:16:15.678700   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.679184   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"0402e7e4-4e77-4d49-9b99-eee89333fa24","resourceVersion":"1060","creationTimestamp":"2023-12-05T20:16:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_16_14_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1205 20:16:15.875932   30150 request.go:629] Waited for 196.368868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:16:15.875982   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:16:15.875987   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:15.875994   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:15.876000   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:15.879169   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:15.879196   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:15.879205   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:15.879213   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:15 GMT
	I1205 20:16:15.879221   30150 round_trippers.go:580]     Audit-Id: 8fc0beca-205a-437c-b7d4-b109729cfcb7
	I1205 20:16:15.879229   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:15.879237   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:15.879244   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:15.879421   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"1065","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1205 20:16:16.075787   30150 request.go:629] Waited for 195.804959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:16.075863   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:16.075877   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:16.075889   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:16.075902   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:16.079241   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:16.079266   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:16.079277   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:16.079285   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:16.079293   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:16.079302   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:16.079311   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:16 GMT
	I1205 20:16:16.079320   30150 round_trippers.go:580]     Audit-Id: 9176acbd-b266-4663-be70-bb2ba63019ee
	I1205 20:16:16.079441   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"0402e7e4-4e77-4d49-9b99-eee89333fa24","resourceVersion":"1060","creationTimestamp":"2023-12-05T20:16:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_16_14_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1205 20:16:16.580668   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:16:16.580702   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:16.580718   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:16.580726   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:16.583775   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:16.583800   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:16.583808   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:16 GMT
	I1205 20:16:16.583813   30150 round_trippers.go:580]     Audit-Id: 7a98f6d6-3a22-4e61-9857-374a7bb746d8
	I1205 20:16:16.583819   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:16.583836   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:16.583844   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:16.583849   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:16.584065   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"1081","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1205 20:16:16.584461   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:16:16.584473   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:16.584480   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:16.584486   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:16.587123   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:16.588238   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:16.588248   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:16.588254   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:16.588259   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:16 GMT
	I1205 20:16:16.588267   30150 round_trippers.go:580]     Audit-Id: 9fbf5e36-be50-4d30-a572-e27a67476d9f
	I1205 20:16:16.588272   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:16.588279   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:16.588359   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"0402e7e4-4e77-4d49-9b99-eee89333fa24","resourceVersion":"1060","creationTimestamp":"2023-12-05T20:16:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_16_14_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1205 20:16:16.588602   30150 pod_ready.go:92] pod "kube-proxy-kjph8" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:16.588615   30150 pod_ready.go:81] duration metric: took 1.278802307s waiting for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:16.588622   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:16.675977   30150 request.go:629] Waited for 87.280492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:16:16.676032   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:16:16.676037   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:16.676048   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:16.676054   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:16.679035   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:16.679054   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:16.679060   30150 round_trippers.go:580]     Audit-Id: 0c227903-37a3-4da0-a2d8-587e4eda1b22
	I1205 20:16:16.679066   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:16.679071   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:16.679076   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:16.679081   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:16.679088   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:16 GMT
	I1205 20:16:16.679216   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"783","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:16:16.875460   30150 request.go:629] Waited for 195.85576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:16.875535   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:16.875540   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:16.875550   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:16.875558   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:16.878299   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:16.878334   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:16.878342   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:16 GMT
	I1205 20:16:16.878348   30150 round_trippers.go:580]     Audit-Id: e726fada-ef59-495c-8b8f-097a718bedb3
	I1205 20:16:16.878353   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:16.878358   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:16.878363   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:16.878368   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:16.878537   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:16.878919   30150 pod_ready.go:92] pod "kube-proxy-mgmt2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:16.878939   30150 pod_ready.go:81] duration metric: took 290.311293ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:16.878952   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:17.076376   30150 request.go:629] Waited for 197.358386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:16:17.076467   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:16:17.076479   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:17.076491   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:17.076505   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:17.079636   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:17.079657   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:17.079664   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:17.079669   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:17.079675   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:17.079682   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:17 GMT
	I1205 20:16:17.079689   30150 round_trippers.go:580]     Audit-Id: 3db53e2a-96fa-4d36-8daa-7ca254f6dc99
	I1205 20:16:17.079697   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:17.079993   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xvjj7","generateName":"kube-proxy-","namespace":"kube-system","uid":"19641919-0011-4726-b884-cc468d0f2dd0","resourceVersion":"724","creationTimestamp":"2023-12-05T20:05:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1205 20:16:17.275730   30150 request.go:629] Waited for 195.368682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:16:17.275805   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:16:17.275810   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:17.275817   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:17.275823   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:17.278914   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:17.278935   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:17.278942   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:17.278947   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:17.278952   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:17.278957   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:17.278963   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:17 GMT
	I1205 20:16:17.278968   30150 round_trippers.go:580]     Audit-Id: ca9674ef-8d22-45ba-b81f-8ec73b7b51fb
	I1205 20:16:17.279576   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m03","uid":"b3bc91db-0091-4e00-86c1-c071017fca0a","resourceVersion":"1061","creationTimestamp":"2023-12-05T20:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_16_14_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I1205 20:16:17.279866   30150 pod_ready.go:92] pod "kube-proxy-xvjj7" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:17.279883   30150 pod_ready.go:81] duration metric: took 400.92514ms waiting for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:17.279891   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:17.476355   30150 request.go:629] Waited for 196.407343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:16:17.476448   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:16:17.476456   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:17.476466   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:17.476476   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:17.479476   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:17.479497   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:17.479504   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:17.479510   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:17 GMT
	I1205 20:16:17.479515   30150 round_trippers.go:580]     Audit-Id: 1ad0f118-e3c5-46cb-ae96-b96f2492ade1
	I1205 20:16:17.479520   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:17.479525   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:17.479530   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:17.479856   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"897","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:16:17.675521   30150 request.go:629] Waited for 195.28963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:17.675590   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:16:17.675595   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:17.675602   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:17.675608   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:17.678374   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:16:17.678396   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:17.678405   30150 round_trippers.go:580]     Audit-Id: 921b5a2e-c52e-46e9-a369-7bf315d694d4
	I1205 20:16:17.678412   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:17.678420   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:17.678428   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:17.678437   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:17.678449   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:17 GMT
	I1205 20:16:17.678590   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:16:17.678894   30150 pod_ready.go:92] pod "kube-scheduler-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:16:17.678908   30150 pod_ready.go:81] duration metric: took 399.010841ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:16:17.678917   30150 pod_ready.go:38] duration metric: took 2.400488877s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:16:17.678932   30150 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:16:17.678974   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:16:17.693611   30150 system_svc.go:56] duration metric: took 14.673455ms WaitForService to wait for kubelet.
	I1205 20:16:17.693637   30150 kubeadm.go:581] duration metric: took 2.437172068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:16:17.693659   30150 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:16:17.876140   30150 request.go:629] Waited for 182.391883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I1205 20:16:17.876217   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:16:17.876242   30150 round_trippers.go:469] Request Headers:
	I1205 20:16:17.876257   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:16:17.876263   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:16:17.879363   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:16:17.879385   30150 round_trippers.go:577] Response Headers:
	I1205 20:16:17.879395   30150 round_trippers.go:580]     Audit-Id: 80c1913a-afe5-4f3f-89d3-a81cf47b2fb9
	I1205 20:16:17.879402   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:16:17.879409   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:16:17.879417   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:16:17.879426   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:16:17.879438   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:16:17 GMT
	I1205 20:16:17.880103   30150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1086"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16208 chars]
	I1205 20:16:17.880675   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:16:17.880695   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:16:17.880707   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:16:17.880713   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:16:17.880719   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:16:17.880724   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:16:17.880734   30150 node_conditions.go:105] duration metric: took 187.064276ms to run NodePressure ...
	I1205 20:16:17.880751   30150 start.go:228] waiting for startup goroutines ...
	I1205 20:16:17.880774   30150 start.go:242] writing updated cluster config ...
	I1205 20:16:17.881208   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:16:17.881321   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:16:17.883908   30150 out.go:177] * Starting worker node multinode-558947-m03 in cluster multinode-558947
	I1205 20:16:17.885249   30150 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:16:17.885271   30150 cache.go:56] Caching tarball of preloaded images
	I1205 20:16:17.885363   30150 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:16:17.885376   30150 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:16:17.885467   30150 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/config.json ...
	I1205 20:16:17.885634   30150 start.go:365] acquiring machines lock for multinode-558947-m03: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:16:17.885683   30150 start.go:369] acquired machines lock for "multinode-558947-m03" in 29.439µs
	I1205 20:16:17.885704   30150 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:16:17.885712   30150 fix.go:54] fixHost starting: m03
	I1205 20:16:17.885963   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:16:17.886002   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:16:17.901472   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I1205 20:16:17.901898   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:16:17.902341   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:16:17.902370   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:16:17.902649   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:16:17.902856   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:16:17.903013   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetState
	I1205 20:16:17.904558   30150 fix.go:102] recreateIfNeeded on multinode-558947-m03: state=Running err=<nil>
	W1205 20:16:17.904578   30150 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:16:17.906525   30150 out.go:177] * Updating the running kvm2 "multinode-558947-m03" VM ...
	I1205 20:16:17.907912   30150 machine.go:88] provisioning docker machine ...
	I1205 20:16:17.907929   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:16:17.908140   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetMachineName
	I1205 20:16:17.908318   30150 buildroot.go:166] provisioning hostname "multinode-558947-m03"
	I1205 20:16:17.908334   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetMachineName
	I1205 20:16:17.908468   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:16:17.910865   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:17.911324   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:17.911352   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:17.911492   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:16:17.911657   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:17.911813   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:17.911933   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:16:17.912080   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:16:17.912377   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1205 20:16:17.912389   30150 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-558947-m03 && echo "multinode-558947-m03" | sudo tee /etc/hostname
	I1205 20:16:18.059427   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-558947-m03
	
	I1205 20:16:18.059454   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:16:18.062118   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.062508   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:18.062541   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.062729   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:16:18.062947   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:18.063110   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:18.063266   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:16:18.063405   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:16:18.063881   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1205 20:16:18.063911   30150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-558947-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-558947-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-558947-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:16:18.199344   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:16:18.199381   30150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:16:18.199400   30150 buildroot.go:174] setting up certificates
	I1205 20:16:18.199412   30150 provision.go:83] configureAuth start
	I1205 20:16:18.199426   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetMachineName
	I1205 20:16:18.199693   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetIP
	I1205 20:16:18.202335   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.202667   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:18.202705   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.202866   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:16:18.205069   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.205395   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:18.205416   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.205589   30150 provision.go:138] copyHostCerts
	I1205 20:16:18.205623   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:16:18.205659   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:16:18.205671   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:16:18.205752   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:16:18.205835   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:16:18.205855   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:16:18.205864   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:16:18.205903   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:16:18.205959   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:16:18.205983   30150 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:16:18.205992   30150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:16:18.206025   30150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:16:18.206091   30150 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.multinode-558947-m03 san=[192.168.39.248 192.168.39.248 localhost 127.0.0.1 minikube multinode-558947-m03]
	I1205 20:16:18.261987   30150 provision.go:172] copyRemoteCerts
	I1205 20:16:18.262039   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:16:18.262063   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:16:18.264518   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.264814   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:18.264840   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.264989   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:16:18.265150   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:18.265259   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:16:18.265366   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m03/id_rsa Username:docker}
	I1205 20:16:18.360085   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:16:18.360155   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:16:18.382407   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:16:18.382467   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:16:18.404915   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:16:18.404989   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 20:16:18.428135   30150 provision.go:86] duration metric: configureAuth took 228.709296ms
	I1205 20:16:18.428161   30150 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:16:18.428393   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:16:18.428473   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:16:18.430944   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.431326   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:16:18.431355   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:16:18.431524   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:16:18.431723   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:18.431904   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:16:18.432030   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:16:18.432189   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:16:18.432487   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1205 20:16:18.432503   30150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:17:48.987621   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:17:48.987656   30150 machine.go:91] provisioned docker machine in 1m31.079732969s
	I1205 20:17:48.987668   30150 start.go:300] post-start starting for "multinode-558947-m03" (driver="kvm2")
	I1205 20:17:48.987678   30150 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:17:48.987698   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:17:48.988053   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:17:48.988075   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:17:48.991263   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:48.991745   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:48.991797   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:48.991996   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:17:48.992178   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:17:48.992346   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:17:48.992476   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m03/id_rsa Username:docker}
	I1205 20:17:49.097457   30150 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:17:49.102065   30150 command_runner.go:130] > NAME=Buildroot
	I1205 20:17:49.102091   30150 command_runner.go:130] > VERSION=2021.02.12-1-gf888a99-dirty
	I1205 20:17:49.102098   30150 command_runner.go:130] > ID=buildroot
	I1205 20:17:49.102108   30150 command_runner.go:130] > VERSION_ID=2021.02.12
	I1205 20:17:49.102115   30150 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1205 20:17:49.102220   30150 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:17:49.102248   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:17:49.102342   30150 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:17:49.102444   30150 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:17:49.102458   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /etc/ssl/certs/134102.pem
	I1205 20:17:49.102540   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:17:49.111908   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:17:49.136663   30150 start.go:303] post-start completed in 148.982386ms
	I1205 20:17:49.136686   30150 fix.go:56] fixHost completed within 1m31.25097635s
	I1205 20:17:49.136711   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:17:49.139392   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.139764   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:49.139800   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.139932   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:17:49.140122   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:17:49.140284   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:17:49.140406   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:17:49.140570   30150 main.go:141] libmachine: Using SSH client type: native
	I1205 20:17:49.141005   30150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I1205 20:17:49.141019   30150 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:17:49.275533   30150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701807469.267626981
	
	I1205 20:17:49.275560   30150 fix.go:206] guest clock: 1701807469.267626981
	I1205 20:17:49.275569   30150 fix.go:219] Guest: 2023-12-05 20:17:49.267626981 +0000 UTC Remote: 2023-12-05 20:17:49.136690221 +0000 UTC m=+557.599249087 (delta=130.93676ms)
	I1205 20:17:49.275588   30150 fix.go:190] guest clock delta is within tolerance: 130.93676ms
	I1205 20:17:49.275599   30150 start.go:83] releasing machines lock for "multinode-558947-m03", held for 1m31.389899508s
	I1205 20:17:49.275665   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:17:49.275957   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetIP
	I1205 20:17:49.278519   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.278813   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:49.278844   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.280828   30150 out.go:177] * Found network options:
	I1205 20:17:49.282449   30150 out.go:177]   - NO_PROXY=192.168.39.3,192.168.39.10
	W1205 20:17:49.284197   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:17:49.284220   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:17:49.284234   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:17:49.284998   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:17:49.285218   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .DriverName
	I1205 20:17:49.285304   30150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:17:49.285350   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	W1205 20:17:49.285414   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:17:49.285436   30150 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:17:49.285501   30150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:17:49.285525   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHHostname
	I1205 20:17:49.288532   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.288823   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.288984   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:49.289037   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.289167   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:17:49.289315   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:49.289351   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:17:49.289381   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:49.289522   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:17:49.289528   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHPort
	I1205 20:17:49.289703   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHKeyPath
	I1205 20:17:49.289719   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m03/id_rsa Username:docker}
	I1205 20:17:49.289861   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetSSHUsername
	I1205 20:17:49.290013   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m03/id_rsa Username:docker}
	I1205 20:17:49.544987   30150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 20:17:49.544998   30150 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 20:17:49.551302   30150 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 20:17:49.551351   30150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:17:49.551412   30150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:17:49.561917   30150 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:17:49.561944   30150 start.go:475] detecting cgroup driver to use...
	I1205 20:17:49.562011   30150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:17:49.577260   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:17:49.590164   30150 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:17:49.590235   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:17:49.605748   30150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:17:49.619189   30150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:17:49.766956   30150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:17:49.905135   30150 docker.go:219] disabling docker service ...
	I1205 20:17:49.905203   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:17:49.920471   30150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:17:49.934385   30150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:17:50.068260   30150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:17:50.183845   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:17:50.196772   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:17:50.214374   30150 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 20:17:50.214858   30150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:17:50.214922   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:50.224586   30150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:17:50.224652   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:50.234477   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:50.243876   30150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:17:50.253527   30150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:17:50.263400   30150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:17:50.271755   30150 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 20:17:50.271849   30150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:17:50.281331   30150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:17:50.408350   30150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:17:50.658906   30150 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:17:50.658970   30150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:17:50.664870   30150 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 20:17:50.664900   30150 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 20:17:50.664911   30150 command_runner.go:130] > Device: 16h/22d	Inode: 1159        Links: 1
	I1205 20:17:50.664923   30150 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:17:50.664930   30150 command_runner.go:130] > Access: 2023-12-05 20:17:50.572774957 +0000
	I1205 20:17:50.664942   30150 command_runner.go:130] > Modify: 2023-12-05 20:17:50.572774957 +0000
	I1205 20:17:50.664950   30150 command_runner.go:130] > Change: 2023-12-05 20:17:50.572774957 +0000
	I1205 20:17:50.664956   30150 command_runner.go:130] >  Birth: -
	I1205 20:17:50.665331   30150 start.go:543] Will wait 60s for crictl version
	I1205 20:17:50.665400   30150 ssh_runner.go:195] Run: which crictl
	I1205 20:17:50.669393   30150 command_runner.go:130] > /usr/bin/crictl
	I1205 20:17:50.669465   30150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:17:50.712261   30150 command_runner.go:130] > Version:  0.1.0
	I1205 20:17:50.712288   30150 command_runner.go:130] > RuntimeName:  cri-o
	I1205 20:17:50.712296   30150 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1205 20:17:50.712303   30150 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 20:17:50.713433   30150 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:17:50.713519   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:17:50.778398   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:17:50.778421   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:17:50.778432   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:17:50.778439   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:17:50.778448   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:17:50.778455   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:17:50.778461   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:17:50.778470   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:17:50.778481   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:17:50.778493   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:17:50.778503   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:17:50.778509   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:17:50.778613   30150 ssh_runner.go:195] Run: crio --version
	I1205 20:17:50.826615   30150 command_runner.go:130] > crio version 1.24.1
	I1205 20:17:50.826641   30150 command_runner.go:130] > Version:          1.24.1
	I1205 20:17:50.826653   30150 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1205 20:17:50.826659   30150 command_runner.go:130] > GitTreeState:     dirty
	I1205 20:17:50.826667   30150 command_runner.go:130] > BuildDate:        2023-12-01T05:08:03Z
	I1205 20:17:50.826675   30150 command_runner.go:130] > GoVersion:        go1.19.9
	I1205 20:17:50.826681   30150 command_runner.go:130] > Compiler:         gc
	I1205 20:17:50.826688   30150 command_runner.go:130] > Platform:         linux/amd64
	I1205 20:17:50.826696   30150 command_runner.go:130] > Linkmode:         dynamic
	I1205 20:17:50.826715   30150 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1205 20:17:50.826720   30150 command_runner.go:130] > SeccompEnabled:   true
	I1205 20:17:50.826730   30150 command_runner.go:130] > AppArmorEnabled:  false
	I1205 20:17:50.830126   30150 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:17:50.831771   30150 out.go:177]   - env NO_PROXY=192.168.39.3
	I1205 20:17:50.833351   30150 out.go:177]   - env NO_PROXY=192.168.39.3,192.168.39.10
	I1205 20:17:50.834761   30150 main.go:141] libmachine: (multinode-558947-m03) Calling .GetIP
	I1205 20:17:50.837390   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:50.837754   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:10:68", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:06:13 +0000 UTC Type:0 Mac:52:54:00:b0:10:68 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:multinode-558947-m03 Clientid:01:52:54:00:b0:10:68}
	I1205 20:17:50.837783   30150 main.go:141] libmachine: (multinode-558947-m03) DBG | domain multinode-558947-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:b0:10:68 in network mk-multinode-558947
	I1205 20:17:50.837951   30150 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:17:50.842244   30150 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 20:17:50.842303   30150 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947 for IP: 192.168.39.248
	I1205 20:17:50.842326   30150 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:17:50.842493   30150 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:17:50.842535   30150 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:17:50.842547   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:17:50.842562   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:17:50.842574   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:17:50.842586   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:17:50.842631   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:17:50.842658   30150 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:17:50.842670   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:17:50.842694   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:17:50.842716   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:17:50.842744   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:17:50.842799   30150 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:17:50.842840   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> /usr/share/ca-certificates/134102.pem
	I1205 20:17:50.842860   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:50.842873   30150 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem -> /usr/share/ca-certificates/13410.pem
	I1205 20:17:50.843223   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:17:50.868049   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:17:50.892223   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:17:50.915168   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:17:50.938343   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:17:50.961015   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:17:50.984140   30150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:17:51.044216   30150 ssh_runner.go:195] Run: openssl version
	I1205 20:17:51.051232   30150 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1205 20:17:51.051359   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:17:51.077983   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:51.084641   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:51.085166   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:51.085217   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:17:51.090662   30150 command_runner.go:130] > b5213941
	I1205 20:17:51.091099   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:17:51.101216   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:17:51.120148   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:17:51.128017   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:17:51.128208   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:17:51.128255   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:17:51.133628   30150 command_runner.go:130] > 51391683
	I1205 20:17:51.133819   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:17:51.142258   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:17:51.152455   30150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:17:51.156858   30150 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:17:51.157197   30150 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:17:51.157254   30150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:17:51.162498   30150 command_runner.go:130] > 3ec20f2e
	I1205 20:17:51.162608   30150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:17:51.171559   30150 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:17:51.175548   30150 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:17:51.175586   30150 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:17:51.175674   30150 ssh_runner.go:195] Run: crio config
	I1205 20:17:51.240309   30150 command_runner.go:130] ! time="2023-12-05 20:17:51.232618317Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1205 20:17:51.240633   30150 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 20:17:51.254333   30150 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 20:17:51.254354   30150 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 20:17:51.254361   30150 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 20:17:51.254366   30150 command_runner.go:130] > #
	I1205 20:17:51.254378   30150 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 20:17:51.254389   30150 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 20:17:51.254400   30150 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 20:17:51.254412   30150 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 20:17:51.254422   30150 command_runner.go:130] > # reload'.
	I1205 20:17:51.254432   30150 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 20:17:51.254441   30150 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 20:17:51.254450   30150 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 20:17:51.254460   30150 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 20:17:51.254469   30150 command_runner.go:130] > [crio]
	I1205 20:17:51.254479   30150 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 20:17:51.254491   30150 command_runner.go:130] > # containers images, in this directory.
	I1205 20:17:51.254502   30150 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 20:17:51.254515   30150 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 20:17:51.254527   30150 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 20:17:51.254540   30150 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 20:17:51.254555   30150 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 20:17:51.254565   30150 command_runner.go:130] > storage_driver = "overlay"
	I1205 20:17:51.254575   30150 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 20:17:51.254587   30150 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 20:17:51.254594   30150 command_runner.go:130] > storage_option = [
	I1205 20:17:51.254605   30150 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 20:17:51.254611   30150 command_runner.go:130] > ]
	I1205 20:17:51.254624   30150 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 20:17:51.254636   30150 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 20:17:51.254647   30150 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 20:17:51.254659   30150 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 20:17:51.254672   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 20:17:51.254683   30150 command_runner.go:130] > # always happen on a node reboot
	I1205 20:17:51.254691   30150 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 20:17:51.254700   30150 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 20:17:51.254708   30150 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 20:17:51.254719   30150 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 20:17:51.254727   30150 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1205 20:17:51.254739   30150 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 20:17:51.254755   30150 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 20:17:51.254765   30150 command_runner.go:130] > # internal_wipe = true
	I1205 20:17:51.254775   30150 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 20:17:51.254788   30150 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 20:17:51.254800   30150 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 20:17:51.254809   30150 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 20:17:51.254822   30150 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 20:17:51.254831   30150 command_runner.go:130] > [crio.api]
	I1205 20:17:51.254840   30150 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 20:17:51.254851   30150 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 20:17:51.254860   30150 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 20:17:51.254871   30150 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 20:17:51.254885   30150 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 20:17:51.254897   30150 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 20:17:51.254904   30150 command_runner.go:130] > # stream_port = "0"
	I1205 20:17:51.254917   30150 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 20:17:51.254926   30150 command_runner.go:130] > # stream_enable_tls = false
	I1205 20:17:51.254932   30150 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 20:17:51.254939   30150 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 20:17:51.254945   30150 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 20:17:51.254954   30150 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 20:17:51.254965   30150 command_runner.go:130] > # minutes.
	I1205 20:17:51.254976   30150 command_runner.go:130] > # stream_tls_cert = ""
	I1205 20:17:51.254986   30150 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 20:17:51.255007   30150 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 20:17:51.255017   30150 command_runner.go:130] > # stream_tls_key = ""
	I1205 20:17:51.255030   30150 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 20:17:51.255041   30150 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 20:17:51.255049   30150 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 20:17:51.255059   30150 command_runner.go:130] > # stream_tls_ca = ""
	I1205 20:17:51.255075   30150 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:17:51.255087   30150 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 20:17:51.255102   30150 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1205 20:17:51.255113   30150 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 20:17:51.255151   30150 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 20:17:51.255166   30150 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 20:17:51.255173   30150 command_runner.go:130] > [crio.runtime]
	I1205 20:17:51.255186   30150 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 20:17:51.255199   30150 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 20:17:51.255207   30150 command_runner.go:130] > # "nofile=1024:2048"
	I1205 20:17:51.255216   30150 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 20:17:51.255226   30150 command_runner.go:130] > # default_ulimits = [
	I1205 20:17:51.255236   30150 command_runner.go:130] > # ]
	I1205 20:17:51.255249   30150 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 20:17:51.255259   30150 command_runner.go:130] > # no_pivot = false
	I1205 20:17:51.255272   30150 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 20:17:51.255285   30150 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 20:17:51.255295   30150 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 20:17:51.255304   30150 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 20:17:51.255316   30150 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 20:17:51.255330   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:17:51.255342   30150 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 20:17:51.255352   30150 command_runner.go:130] > # Cgroup setting for conmon
	I1205 20:17:51.255366   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 20:17:51.255376   30150 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 20:17:51.255385   30150 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 20:17:51.255396   30150 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 20:17:51.255411   30150 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 20:17:51.255421   30150 command_runner.go:130] > conmon_env = [
	I1205 20:17:51.255434   30150 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 20:17:51.255443   30150 command_runner.go:130] > ]
	I1205 20:17:51.255452   30150 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 20:17:51.255464   30150 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 20:17:51.255470   30150 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 20:17:51.255474   30150 command_runner.go:130] > # default_env = [
	I1205 20:17:51.255480   30150 command_runner.go:130] > # ]
	I1205 20:17:51.255494   30150 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 20:17:51.255505   30150 command_runner.go:130] > # selinux = false
	I1205 20:17:51.255518   30150 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 20:17:51.255531   30150 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 20:17:51.255541   30150 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 20:17:51.255550   30150 command_runner.go:130] > # seccomp_profile = ""
	I1205 20:17:51.255556   30150 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 20:17:51.255567   30150 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 20:17:51.255581   30150 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 20:17:51.255593   30150 command_runner.go:130] > # which might increase security.
	I1205 20:17:51.255604   30150 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 20:17:51.255617   30150 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 20:17:51.255627   30150 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 20:17:51.255639   30150 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 20:17:51.255650   30150 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 20:17:51.255661   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:17:51.255673   30150 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 20:17:51.255686   30150 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 20:17:51.255693   30150 command_runner.go:130] > # the cgroup blockio controller.
	I1205 20:17:51.255704   30150 command_runner.go:130] > # blockio_config_file = ""
	I1205 20:17:51.255715   30150 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 20:17:51.255723   30150 command_runner.go:130] > # irqbalance daemon.
	I1205 20:17:51.255729   30150 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 20:17:51.255742   30150 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 20:17:51.255754   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:17:51.255762   30150 command_runner.go:130] > # rdt_config_file = ""
	I1205 20:17:51.255773   30150 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 20:17:51.255782   30150 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 20:17:51.255792   30150 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 20:17:51.255802   30150 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 20:17:51.255812   30150 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 20:17:51.255824   30150 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 20:17:51.255834   30150 command_runner.go:130] > # will be added.
	I1205 20:17:51.255841   30150 command_runner.go:130] > # default_capabilities = [
	I1205 20:17:51.255850   30150 command_runner.go:130] > # 	"CHOWN",
	I1205 20:17:51.255862   30150 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 20:17:51.255871   30150 command_runner.go:130] > # 	"FSETID",
	I1205 20:17:51.255879   30150 command_runner.go:130] > # 	"FOWNER",
	I1205 20:17:51.255889   30150 command_runner.go:130] > # 	"SETGID",
	I1205 20:17:51.255896   30150 command_runner.go:130] > # 	"SETUID",
	I1205 20:17:51.255906   30150 command_runner.go:130] > # 	"SETPCAP",
	I1205 20:17:51.255912   30150 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 20:17:51.255919   30150 command_runner.go:130] > # 	"KILL",
	I1205 20:17:51.255922   30150 command_runner.go:130] > # ]
	I1205 20:17:51.255931   30150 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 20:17:51.255937   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:17:51.255944   30150 command_runner.go:130] > # default_sysctls = [
	I1205 20:17:51.255948   30150 command_runner.go:130] > # ]
	I1205 20:17:51.255955   30150 command_runner.go:130] > # List of devices on the host that a
	I1205 20:17:51.255961   30150 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 20:17:51.255968   30150 command_runner.go:130] > # allowed_devices = [
	I1205 20:17:51.255972   30150 command_runner.go:130] > # 	"/dev/fuse",
	I1205 20:17:51.255978   30150 command_runner.go:130] > # ]
	I1205 20:17:51.255983   30150 command_runner.go:130] > # List of additional devices, specified as
	I1205 20:17:51.255993   30150 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 20:17:51.255998   30150 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 20:17:51.256016   30150 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 20:17:51.256023   30150 command_runner.go:130] > # additional_devices = [
	I1205 20:17:51.256026   30150 command_runner.go:130] > # ]
	I1205 20:17:51.256031   30150 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 20:17:51.256036   30150 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 20:17:51.256040   30150 command_runner.go:130] > # 	"/etc/cdi",
	I1205 20:17:51.256047   30150 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 20:17:51.256050   30150 command_runner.go:130] > # ]
	I1205 20:17:51.256056   30150 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 20:17:51.256064   30150 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 20:17:51.256069   30150 command_runner.go:130] > # Defaults to false.
	I1205 20:17:51.256075   30150 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 20:17:51.256082   30150 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 20:17:51.256091   30150 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 20:17:51.256095   30150 command_runner.go:130] > # hooks_dir = [
	I1205 20:17:51.256100   30150 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 20:17:51.256104   30150 command_runner.go:130] > # ]
	I1205 20:17:51.256110   30150 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 20:17:51.256119   30150 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 20:17:51.256124   30150 command_runner.go:130] > # its default mounts from the following two files:
	I1205 20:17:51.256130   30150 command_runner.go:130] > #
	I1205 20:17:51.256140   30150 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 20:17:51.256149   30150 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 20:17:51.256156   30150 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 20:17:51.256162   30150 command_runner.go:130] > #
	I1205 20:17:51.256168   30150 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 20:17:51.256177   30150 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 20:17:51.256184   30150 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 20:17:51.256191   30150 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 20:17:51.256195   30150 command_runner.go:130] > #
	I1205 20:17:51.256199   30150 command_runner.go:130] > # default_mounts_file = ""
	I1205 20:17:51.256207   30150 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 20:17:51.256214   30150 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 20:17:51.256220   30150 command_runner.go:130] > pids_limit = 1024
	I1205 20:17:51.256226   30150 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 20:17:51.256235   30150 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 20:17:51.256241   30150 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 20:17:51.256252   30150 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 20:17:51.256258   30150 command_runner.go:130] > # log_size_max = -1
	I1205 20:17:51.256265   30150 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 20:17:51.256271   30150 command_runner.go:130] > # log_to_journald = false
	I1205 20:17:51.256278   30150 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 20:17:51.256285   30150 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 20:17:51.256290   30150 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 20:17:51.256298   30150 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 20:17:51.256303   30150 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 20:17:51.256310   30150 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 20:17:51.256315   30150 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 20:17:51.256322   30150 command_runner.go:130] > # read_only = false
	I1205 20:17:51.256328   30150 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 20:17:51.256336   30150 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 20:17:51.256341   30150 command_runner.go:130] > # live configuration reload.
	I1205 20:17:51.256347   30150 command_runner.go:130] > # log_level = "info"
	I1205 20:17:51.256353   30150 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 20:17:51.256361   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:17:51.256365   30150 command_runner.go:130] > # log_filter = ""
	I1205 20:17:51.256373   30150 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 20:17:51.256380   30150 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 20:17:51.256386   30150 command_runner.go:130] > # separated by comma.
	I1205 20:17:51.256390   30150 command_runner.go:130] > # uid_mappings = ""
	I1205 20:17:51.256397   30150 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 20:17:51.256403   30150 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 20:17:51.256409   30150 command_runner.go:130] > # separated by comma.
	I1205 20:17:51.256413   30150 command_runner.go:130] > # gid_mappings = ""
	I1205 20:17:51.256422   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 20:17:51.256431   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:17:51.256439   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:17:51.256444   30150 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 20:17:51.256450   30150 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 20:17:51.256458   30150 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 20:17:51.256466   30150 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 20:17:51.256473   30150 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 20:17:51.256479   30150 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 20:17:51.256487   30150 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 20:17:51.256495   30150 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 20:17:51.256501   30150 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 20:17:51.256507   30150 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 20:17:51.256515   30150 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 20:17:51.256522   30150 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 20:17:51.256528   30150 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 20:17:51.256535   30150 command_runner.go:130] > drop_infra_ctr = false
	I1205 20:17:51.256544   30150 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 20:17:51.256550   30150 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 20:17:51.256559   30150 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 20:17:51.256565   30150 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 20:17:51.256571   30150 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 20:17:51.256578   30150 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 20:17:51.256583   30150 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 20:17:51.256592   30150 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 20:17:51.256600   30150 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 20:17:51.256609   30150 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 20:17:51.256615   30150 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1205 20:17:51.256623   30150 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1205 20:17:51.256628   30150 command_runner.go:130] > # default_runtime = "runc"
	I1205 20:17:51.256634   30150 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 20:17:51.256644   30150 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 20:17:51.256654   30150 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 20:17:51.256662   30150 command_runner.go:130] > # creation as a file is not desired either.
	I1205 20:17:51.256671   30150 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 20:17:51.256678   30150 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 20:17:51.256683   30150 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 20:17:51.256688   30150 command_runner.go:130] > # ]
	I1205 20:17:51.256694   30150 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 20:17:51.256700   30150 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 20:17:51.256718   30150 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1205 20:17:51.256726   30150 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1205 20:17:51.256739   30150 command_runner.go:130] > #
	I1205 20:17:51.256750   30150 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1205 20:17:51.256759   30150 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1205 20:17:51.256769   30150 command_runner.go:130] > #  runtime_type = "oci"
	I1205 20:17:51.256777   30150 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1205 20:17:51.256788   30150 command_runner.go:130] > #  privileged_without_host_devices = false
	I1205 20:17:51.256798   30150 command_runner.go:130] > #  allowed_annotations = []
	I1205 20:17:51.256803   30150 command_runner.go:130] > # Where:
	I1205 20:17:51.256815   30150 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1205 20:17:51.256828   30150 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1205 20:17:51.256841   30150 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 20:17:51.256854   30150 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 20:17:51.256862   30150 command_runner.go:130] > #   in $PATH.
	I1205 20:17:51.256874   30150 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1205 20:17:51.256884   30150 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 20:17:51.256893   30150 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1205 20:17:51.256899   30150 command_runner.go:130] > #   state.
	I1205 20:17:51.256905   30150 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 20:17:51.256914   30150 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 20:17:51.256924   30150 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 20:17:51.256932   30150 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 20:17:51.256940   30150 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 20:17:51.256947   30150 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 20:17:51.256955   30150 command_runner.go:130] > #   The currently recognized values are:
	I1205 20:17:51.256964   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 20:17:51.256972   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 20:17:51.256980   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 20:17:51.256987   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 20:17:51.256996   30150 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 20:17:51.257003   30150 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 20:17:51.257012   30150 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 20:17:51.257018   30150 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1205 20:17:51.257026   30150 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 20:17:51.257030   30150 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 20:17:51.257035   30150 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 20:17:51.257042   30150 command_runner.go:130] > runtime_type = "oci"
	I1205 20:17:51.257046   30150 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 20:17:51.257052   30150 command_runner.go:130] > runtime_config_path = ""
	I1205 20:17:51.257056   30150 command_runner.go:130] > monitor_path = ""
	I1205 20:17:51.257062   30150 command_runner.go:130] > monitor_cgroup = ""
	I1205 20:17:51.257066   30150 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 20:17:51.257072   30150 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1205 20:17:51.257079   30150 command_runner.go:130] > # running containers
	I1205 20:17:51.257083   30150 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1205 20:17:51.257090   30150 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1205 20:17:51.257116   30150 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1205 20:17:51.257125   30150 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1205 20:17:51.257130   30150 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1205 20:17:51.257139   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1205 20:17:51.257145   30150 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1205 20:17:51.257151   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1205 20:17:51.257158   30150 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1205 20:17:51.257163   30150 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1205 20:17:51.257171   30150 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 20:17:51.257179   30150 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 20:17:51.257188   30150 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 20:17:51.257197   30150 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 20:17:51.257208   30150 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 20:17:51.257216   30150 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 20:17:51.257225   30150 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 20:17:51.257236   30150 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 20:17:51.257244   30150 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 20:17:51.257254   30150 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 20:17:51.257260   30150 command_runner.go:130] > # Example:
	I1205 20:17:51.257265   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 20:17:51.257272   30150 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 20:17:51.257281   30150 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 20:17:51.257286   30150 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 20:17:51.257293   30150 command_runner.go:130] > # cpuset = 0
	I1205 20:17:51.257297   30150 command_runner.go:130] > # cpushares = "0-1"
	I1205 20:17:51.257303   30150 command_runner.go:130] > # Where:
	I1205 20:17:51.257308   30150 command_runner.go:130] > # The workload name is workload-type.
	I1205 20:17:51.257314   30150 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 20:17:51.257322   30150 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 20:17:51.257328   30150 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 20:17:51.257339   30150 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 20:17:51.257347   30150 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 20:17:51.257352   30150 command_runner.go:130] > # 
	I1205 20:17:51.257359   30150 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 20:17:51.257364   30150 command_runner.go:130] > #
	I1205 20:17:51.257371   30150 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 20:17:51.257378   30150 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 20:17:51.257387   30150 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 20:17:51.257396   30150 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 20:17:51.257404   30150 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 20:17:51.257410   30150 command_runner.go:130] > [crio.image]
	I1205 20:17:51.257416   30150 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 20:17:51.257423   30150 command_runner.go:130] > # default_transport = "docker://"
	I1205 20:17:51.257429   30150 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 20:17:51.257437   30150 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:17:51.257444   30150 command_runner.go:130] > # global_auth_file = ""
	I1205 20:17:51.257450   30150 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 20:17:51.257457   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:17:51.257464   30150 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1205 20:17:51.257470   30150 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 20:17:51.257478   30150 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 20:17:51.257484   30150 command_runner.go:130] > # This option supports live configuration reload.
	I1205 20:17:51.257491   30150 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 20:17:51.257497   30150 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 20:17:51.257506   30150 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 20:17:51.257514   30150 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 20:17:51.257522   30150 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 20:17:51.257529   30150 command_runner.go:130] > # pause_command = "/pause"
	I1205 20:17:51.257538   30150 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 20:17:51.257547   30150 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 20:17:51.257556   30150 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 20:17:51.257565   30150 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 20:17:51.257573   30150 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 20:17:51.257578   30150 command_runner.go:130] > # signature_policy = ""
	I1205 20:17:51.257587   30150 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 20:17:51.257595   30150 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 20:17:51.257602   30150 command_runner.go:130] > # changing them here.
	I1205 20:17:51.257606   30150 command_runner.go:130] > # insecure_registries = [
	I1205 20:17:51.257612   30150 command_runner.go:130] > # ]
	I1205 20:17:51.257620   30150 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 20:17:51.257627   30150 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 20:17:51.257632   30150 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 20:17:51.257639   30150 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 20:17:51.257643   30150 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 20:17:51.257651   30150 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 20:17:51.257656   30150 command_runner.go:130] > # CNI plugins.
	I1205 20:17:51.257660   30150 command_runner.go:130] > [crio.network]
	I1205 20:17:51.257668   30150 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 20:17:51.257676   30150 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 20:17:51.257681   30150 command_runner.go:130] > # cni_default_network = ""
	I1205 20:17:51.257688   30150 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 20:17:51.257693   30150 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 20:17:51.257701   30150 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 20:17:51.257708   30150 command_runner.go:130] > # plugin_dirs = [
	I1205 20:17:51.257712   30150 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 20:17:51.257717   30150 command_runner.go:130] > # ]
	I1205 20:17:51.257723   30150 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 20:17:51.257730   30150 command_runner.go:130] > [crio.metrics]
	I1205 20:17:51.257735   30150 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 20:17:51.257742   30150 command_runner.go:130] > enable_metrics = true
	I1205 20:17:51.257747   30150 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 20:17:51.257753   30150 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 20:17:51.257760   30150 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 20:17:51.257768   30150 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 20:17:51.257776   30150 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 20:17:51.257783   30150 command_runner.go:130] > # metrics_collectors = [
	I1205 20:17:51.257787   30150 command_runner.go:130] > # 	"operations",
	I1205 20:17:51.257794   30150 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 20:17:51.257798   30150 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 20:17:51.257805   30150 command_runner.go:130] > # 	"operations_errors",
	I1205 20:17:51.257809   30150 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 20:17:51.257815   30150 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 20:17:51.257820   30150 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 20:17:51.257827   30150 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 20:17:51.257831   30150 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 20:17:51.257838   30150 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 20:17:51.257842   30150 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 20:17:51.257848   30150 command_runner.go:130] > # 	"containers_oom_total",
	I1205 20:17:51.257852   30150 command_runner.go:130] > # 	"containers_oom",
	I1205 20:17:51.257859   30150 command_runner.go:130] > # 	"processes_defunct",
	I1205 20:17:51.257863   30150 command_runner.go:130] > # 	"operations_total",
	I1205 20:17:51.257870   30150 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 20:17:51.257875   30150 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 20:17:51.257881   30150 command_runner.go:130] > # 	"operations_errors_total",
	I1205 20:17:51.257886   30150 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 20:17:51.257893   30150 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 20:17:51.257897   30150 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 20:17:51.257904   30150 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 20:17:51.257910   30150 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 20:17:51.257917   30150 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 20:17:51.257920   30150 command_runner.go:130] > # ]
	I1205 20:17:51.257927   30150 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 20:17:51.257933   30150 command_runner.go:130] > # metrics_port = 9090
	I1205 20:17:51.257939   30150 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 20:17:51.257945   30150 command_runner.go:130] > # metrics_socket = ""
	I1205 20:17:51.257950   30150 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 20:17:51.257959   30150 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 20:17:51.257967   30150 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 20:17:51.257974   30150 command_runner.go:130] > # certificate on any modification event.
	I1205 20:17:51.257978   30150 command_runner.go:130] > # metrics_cert = ""
	I1205 20:17:51.257985   30150 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 20:17:51.257990   30150 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 20:17:51.257996   30150 command_runner.go:130] > # metrics_key = ""
	I1205 20:17:51.258002   30150 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 20:17:51.258008   30150 command_runner.go:130] > [crio.tracing]
	I1205 20:17:51.258014   30150 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 20:17:51.258020   30150 command_runner.go:130] > # enable_tracing = false
	I1205 20:17:51.258025   30150 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1205 20:17:51.258032   30150 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 20:17:51.258037   30150 command_runner.go:130] > # Number of samples to collect per million spans.
	I1205 20:17:51.258044   30150 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 20:17:51.258050   30150 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 20:17:51.258056   30150 command_runner.go:130] > [crio.stats]
	I1205 20:17:51.258062   30150 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 20:17:51.258070   30150 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 20:17:51.258077   30150 command_runner.go:130] > # stats_collection_period = 0
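	The dump above is the rendered CRI-O configuration (/etc/crio/crio.conf plus minikube's overrides such as cgroup_manager = "cgroupfs", pids_limit = 1024 and the pause_image) that was just written to the joining node. A minimal sketch for spot-checking what the runtime actually loaded, run on the node itself (the grep pattern is only illustrative):
	
	# Ask CRI-O to print its merged configuration and check the overridden keys
	sudo crio config 2>/dev/null | grep -E 'cgroup_manager|pause_image|pids_limit'
	
	# The runtime status as seen through the CRI socket should agree
	sudo crictl info
	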
	I1205 20:17:51.258132   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:17:51.258145   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:17:51.258153   30150 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:17:51.258168   30150 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-558947 NodeName:multinode-558947-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:17:51.258263   30150 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-558947-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
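	The YAML above is the complete kubeadm configuration minikube generated for this node: InitConfiguration and ClusterConfiguration for the join, plus KubeletConfiguration and KubeProxyConfiguration component configs. As a hedged sketch (the /tmp/kubeadm.yaml path is hypothetical, and `kubeadm config validate` assumes a kubeadm release new enough to ship that subcommand), the same document could be checked and compared against upstream defaults with:
	
	# Validate the generated config against the kubeadm / kubelet / kube-proxy schemas
	sudo kubeadm config validate --config /tmp/kubeadm.yaml
	
	# Print the upstream defaults for the same component configs, for comparison
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
	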
	
	I1205 20:17:51.258332   30150 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-558947-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:17:51.258384   30150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:17:51.268212   30150 command_runner.go:130] > kubeadm
	I1205 20:17:51.268234   30150 command_runner.go:130] > kubectl
	I1205 20:17:51.268240   30150 command_runner.go:130] > kubelet
	I1205 20:17:51.268294   30150 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:17:51.268361   30150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1205 20:17:51.277250   30150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 20:17:51.294956   30150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
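	At this point the kubelet.service unit and the 10-kubeadm.conf drop-in shown above have been copied onto the node, so the ExecStart line with --node-ip=192.168.39.248 and the CRI-O socket is what systemd will run. A quick, hedged way to confirm the unit the node actually picked up (commands run on the node over ssh, like the rest of this flow):
	
	# Show the merged kubelet unit, including the minikube-generated drop-in
	sudo systemctl cat kubelet
	
	# Flags the running kubelet was actually started with (once it is up)
	pgrep -a kubelet
	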
	I1205 20:17:51.321125   30150 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I1205 20:17:51.325280   30150 command_runner.go:130] > 192.168.39.3	control-plane.minikube.internal
	I1205 20:17:51.325559   30150 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:17:51.325821   30150 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:17:51.325879   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:51.325922   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:51.340194   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I1205 20:17:51.340685   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:51.341150   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:17:51.341176   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:51.341526   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:51.341760   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:17:51.341923   30150 start.go:304] JoinCluster: &{Name:multinode-558947 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-558947 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.10 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:17:51.342025   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:17:51.342040   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:17:51.344590   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:17:51.344998   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:17:51.345026   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:17:51.345184   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:17:51.345360   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:17:51.345521   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:17:51.345690   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:17:51.524096   30150 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token n4xtyw.uxz0t8s3839tqovr --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
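	The line above is the join command produced by `kubeadm token create --print-join-command --ttl=0` on the control plane. For reference, a sketch of how the token and the --discovery-token-ca-cert-hash could be reproduced by hand on that node (paths follow the certificatesDir and binaries directory used in this log; the hash recipe is the standard openssl one, not anything minikube-specific):
	
	# List the bootstrap tokens kubeadm currently knows about
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token list
	
	# Recompute the CA cert hash that appears in the join command
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	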
	I1205 20:17:51.526307   30150 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1205 20:17:51.526351   30150 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:17:51.526726   30150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:17:51.526775   30150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:17:51.541454   30150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
	I1205 20:17:51.541889   30150 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:17:51.542391   30150 main.go:141] libmachine: Using API Version  1
	I1205 20:17:51.542419   30150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:17:51.542721   30150 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:17:51.542966   30150 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:17:51.543172   30150 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-558947-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1205 20:17:51.543197   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:17:51.546128   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:17:51.546595   30150 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:17:51.546622   30150 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:17:51.546801   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:17:51.546961   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:17:51.547143   30150 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:17:51.547348   30150 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:17:51.739639   30150 command_runner.go:130] > node/multinode-558947-m03 cordoned
	I1205 20:17:54.786222   30150 command_runner.go:130] > pod "busybox-5bc68d56bd-bxtwv" has DeletionTimestamp older than 1 seconds, skipping
	I1205 20:17:54.786246   30150 command_runner.go:130] > node/multinode-558947-m03 drained
	I1205 20:17:54.788245   30150 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1205 20:17:54.788269   30150 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-7dnjd, kube-system/kube-proxy-xvjj7
	I1205 20:17:54.788293   30150 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-558947-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.245099544s)
	I1205 20:17:54.788303   30150 node.go:108] successfully drained node "m03"
	I1205 20:17:54.788647   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:17:54.788887   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:17:54.789184   30150 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1205 20:17:54.789228   30150 round_trippers.go:463] DELETE https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:17:54.789234   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:54.789248   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:54.789258   30150 round_trippers.go:473]     Content-Type: application/json
	I1205 20:17:54.789269   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:54.801476   30150 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1205 20:17:54.801506   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:54.801515   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:54 GMT
	I1205 20:17:54.801520   30150 round_trippers.go:580]     Audit-Id: 5d29007d-6e51-4382-9133-dd6e2f669e41
	I1205 20:17:54.801525   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:54.801531   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:54.801536   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:54.801543   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:54.801550   30150 round_trippers.go:580]     Content-Length: 171
	I1205 20:17:54.801580   30150 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-558947-m03","kind":"nodes","uid":"b3bc91db-0091-4e00-86c1-c071017fca0a"}}
	I1205 20:17:54.801612   30150 node.go:124] successfully deleted node "m03"
	I1205 20:17:54.801623   30150 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
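
(Illustrative aside, not part of the log: the removal above — cordon/drain followed by a DELETE on /api/v1/nodes/multinode-558947-m03 — can be reproduced with client-go. This is a minimal sketch, not minikube's actual node.go code; the kubeconfig path is copied from the log purely for illustration.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the log loads (path illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17731-6237/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the DELETE https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03 request above.
	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-558947-m03", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node multinode-558947-m03 deleted")
}
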
	I1205 20:17:54.801640   30150 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1205 20:17:54.801661   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n4xtyw.uxz0t8s3839tqovr --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-558947-m03"
	I1205 20:17:54.855415   30150 command_runner.go:130] > [preflight] Running pre-flight checks
	I1205 20:17:55.033445   30150 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1205 20:17:55.033472   30150 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1205 20:17:55.099154   30150 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:17:55.099183   30150 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:17:55.099586   30150 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1205 20:17:55.259023   30150 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1205 20:17:55.787492   30150 command_runner.go:130] > This node has joined the cluster:
	I1205 20:17:55.787519   30150 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1205 20:17:55.787527   30150 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1205 20:17:55.787536   30150 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1205 20:17:55.790336   30150 command_runner.go:130] ! W1205 20:17:54.847376    2369 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1205 20:17:55.790367   30150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1205 20:17:55.790379   30150 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1205 20:17:55.790392   30150 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1205 20:17:55.790424   30150 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:17:56.040325   30150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=multinode-558947 minikube.k8s.io/updated_at=2023_12_05T20_17_56_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:17:56.170353   30150 command_runner.go:130] > node/multinode-558947-m02 labeled
	I1205 20:17:56.170383   30150 command_runner.go:130] > node/multinode-558947-m03 labeled
	I1205 20:17:56.170558   30150 start.go:306] JoinCluster complete in 4.82862978s
	I1205 20:17:56.170582   30150 cni.go:84] Creating CNI manager for ""
	I1205 20:17:56.170587   30150 cni.go:136] 3 nodes found, recommending kindnet
	I1205 20:17:56.170645   30150 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:17:56.178289   30150 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1205 20:17:56.178319   30150 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1205 20:17:56.178329   30150 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1205 20:17:56.178346   30150 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 20:17:56.178356   30150 command_runner.go:130] > Access: 2023-12-05 20:13:42.646892622 +0000
	I1205 20:17:56.178374   30150 command_runner.go:130] > Modify: 2023-12-01 05:15:19.000000000 +0000
	I1205 20:17:56.178387   30150 command_runner.go:130] > Change: 2023-12-05 20:13:40.685892622 +0000
	I1205 20:17:56.178399   30150 command_runner.go:130] >  Birth: -
	I1205 20:17:56.178532   30150 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1205 20:17:56.178551   30150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1205 20:17:56.203247   30150 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:17:56.584991   30150 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:17:56.591576   30150 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1205 20:17:56.598116   30150 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1205 20:17:56.612230   30150 command_runner.go:130] > daemonset.apps/kindnet configured
	I1205 20:17:56.615758   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:17:56.615964   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:17:56.616222   30150 round_trippers.go:463] GET https://192.168.39.3:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1205 20:17:56.616235   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.616242   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.616248   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.619135   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.619159   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.619166   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.619177   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.619182   30150 round_trippers.go:580]     Content-Length: 291
	I1205 20:17:56.619188   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.619197   30150 round_trippers.go:580]     Audit-Id: 09df2354-b0af-4b62-b3c8-8cb3c06a3038
	I1205 20:17:56.619208   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.619216   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.619246   30150 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94155912-31e3-4327-a529-cb135b43e314","resourceVersion":"909","creationTimestamp":"2023-12-05T20:03:56Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1205 20:17:56.619362   30150 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-558947" context rescaled to 1 replicas
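
(Illustrative aside: the "rescaled to 1 replicas" step corresponds to reading and writing the deployment's scale subresource. A hedged client-go sketch follows — the function name rescaleCoreDNS is invented here, and it assumes a kubernetes.Interface named cs plus imports context, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", and "k8s.io/client-go/kubernetes"; it is not minikube's kapi.go code.)

// rescaleCoreDNS pins the kube-system/coredns deployment to one replica,
// mirroring the GET .../deployments/coredns/scale call and rescale above.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
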
	I1205 20:17:56.619399   30150 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.248 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1205 20:17:56.622356   30150 out.go:177] * Verifying Kubernetes components...
	I1205 20:17:56.623809   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:17:56.638397   30150 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:17:56.638627   30150 kapi.go:59] client config for multinode-558947: &rest.Config{Host:"https://192.168.39.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/multinode-558947/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:17:56.638885   30150 node_ready.go:35] waiting up to 6m0s for node "multinode-558947-m03" to be "Ready" ...
	I1205 20:17:56.638943   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:17:56.638951   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.638958   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.638964   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.641995   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:56.642015   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.642022   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.642028   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.642033   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.642038   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.642044   30150 round_trippers.go:580]     Audit-Id: 816a9548-e0a7-4f2e-bbdc-cf4d2df73676
	I1205 20:17:56.642050   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.642502   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m03","uid":"d22ae828-3b5d-4214-bed9-a53cc4d7e9ca","resourceVersion":"1237","creationTimestamp":"2023-12-05T20:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_17_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:17:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1205 20:17:56.642760   30150 node_ready.go:49] node "multinode-558947-m03" has status "Ready":"True"
	I1205 20:17:56.642774   30150 node_ready.go:38] duration metric: took 3.873045ms waiting for node "multinode-558947-m03" to be "Ready" ...
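
(Illustrative aside: the node_ready.go wait above reduces to polling the node's Ready condition. The sketch below shows that check; nodeReady is an invented name, and it assumes a kubernetes.Interface named cs plus imports context, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", and "k8s.io/client-go/kubernetes".)

// nodeReady reports whether the named node's Ready condition is True,
// which is what the wait loop above keeps re-checking.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
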
	I1205 20:17:56.642782   30150 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:17:56.642834   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods
	I1205 20:17:56.642844   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.642851   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.642857   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.646460   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:56.646481   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.646488   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.646493   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.646499   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.646504   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.646508   30150 round_trippers.go:580]     Audit-Id: dbff73d6-ffc3-44a3-8955-ab4fc6af039a
	I1205 20:17:56.646514   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.647864   30150 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81999 chars]
	I1205 20:17:56.650412   30150 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.650505   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-knl4d
	I1205 20:17:56.650517   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.650525   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.650533   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.653167   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.653186   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.653193   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.653199   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.653205   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.653211   30150 round_trippers.go:580]     Audit-Id: a91c9049-ebf3-4885-a6eb-17c86b2bb321
	I1205 20:17:56.653219   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.653237   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.653347   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-knl4d","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"28d6c367-593c-469a-90c6-b3c13cedc3df","resourceVersion":"905","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0378384d-16eb-4564-b3cd-7a6938ee7a9d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0378384d-16eb-4564-b3cd-7a6938ee7a9d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1205 20:17:56.653786   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:56.653799   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.653806   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.653812   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.656353   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.656375   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.656385   30150 round_trippers.go:580]     Audit-Id: 8a26ab4f-a342-4327-835f-995f70ddfbed
	I1205 20:17:56.656394   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.656401   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.656406   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.656411   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.656416   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.656788   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:56.657276   30150 pod_ready.go:92] pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:56.657299   30150 pod_ready.go:81] duration metric: took 6.859815ms waiting for pod "coredns-5dd5756b68-knl4d" in "kube-system" namespace to be "Ready" ...
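
(Illustrative aside: the per-pod waits that follow — coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler — all hinge on the pod's Ready condition. A minimal sketch of that check; podReady is an invented name, with the same cs/ctx and import assumptions as the node sketch above.)

// podReady reports whether the pod's Ready condition is True, the condition
// pod_ready.go inspects for each system-critical pod listed above.
func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
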
	I1205 20:17:56.657311   30150 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.657375   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-558947
	I1205 20:17:56.657386   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.657396   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.657406   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.659746   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.659765   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.659772   30150 round_trippers.go:580]     Audit-Id: 5634111e-78a2-438c-ab94-9261f33f73c7
	I1205 20:17:56.659778   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.659784   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.659791   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.659796   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.659802   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.659925   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-558947","namespace":"kube-system","uid":"118e2032-1898-42c0-9aa2-3f15356e9ff3","resourceVersion":"895","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.3:2379","kubernetes.io/config.hash":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.mirror":"d17798ae1d41feb30e7640ec43442332","kubernetes.io/config.seen":"2023-12-05T20:03:56.146034017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1205 20:17:56.660340   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:56.660355   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.660366   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.660376   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.662538   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.662554   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.662560   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.662566   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.662571   30150 round_trippers.go:580]     Audit-Id: 208a4a8e-41d5-4c9a-8aae-5b77464897c4
	I1205 20:17:56.662576   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.662581   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.662586   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.662847   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:56.663239   30150 pod_ready.go:92] pod "etcd-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:56.663257   30150 pod_ready.go:81] duration metric: took 5.937473ms waiting for pod "etcd-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.663279   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.663343   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-558947
	I1205 20:17:56.663354   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.663365   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.663375   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.665508   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.665523   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.665529   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.665534   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.665539   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.665544   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.665549   30150 round_trippers.go:580]     Audit-Id: 40480566-ee30-4f31-adde-af92fce3fdc6
	I1205 20:17:56.665554   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.665676   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-558947","namespace":"kube-system","uid":"36300192-b165-4bee-b791-9fce329428f9","resourceVersion":"871","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.3:8443","kubernetes.io/config.hash":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.mirror":"0a38ef6c4499d9729cedfe70dc9f6984","kubernetes.io/config.seen":"2023-12-05T20:03:56.146037812Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7371 chars]
	I1205 20:17:56.666141   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:56.666158   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.666171   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.666181   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.668644   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:56.668666   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.668676   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.668685   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.668693   30150 round_trippers.go:580]     Audit-Id: ac2b02f5-eb8e-4736-a206-8b6e8a0b6028
	I1205 20:17:56.668701   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.668708   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.668716   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.669018   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:56.669416   30150 pod_ready.go:92] pod "kube-apiserver-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:56.669437   30150 pod_ready.go:81] duration metric: took 6.147142ms waiting for pod "kube-apiserver-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.669450   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.669510   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-558947
	I1205 20:17:56.669522   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.669531   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.669539   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.673124   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:56.673150   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.673160   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.673168   30150 round_trippers.go:580]     Audit-Id: 8c9edc4b-6133-4a6f-b557-4f4916de4ba4
	I1205 20:17:56.673176   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.673184   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.673191   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.673203   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.673366   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-558947","namespace":"kube-system","uid":"49ee6fa8-b7cd-4880-b4db-a1717b685750","resourceVersion":"883","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.mirror":"d4039ac5faaadd6fc4a75accac6480b7","kubernetes.io/config.seen":"2023-12-05T20:03:56.146038937Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6946 chars]
	I1205 20:17:56.673966   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:56.673985   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.673993   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.674002   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.677258   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:56.677280   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.677287   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.677292   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.677297   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.677303   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.677311   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.677319   30150 round_trippers.go:580]     Audit-Id: 94f00e5c-7d1b-4db0-9c41-420cd727e055
	I1205 20:17:56.677485   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:56.677769   30150 pod_ready.go:92] pod "kube-controller-manager-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:56.677784   30150 pod_ready.go:81] duration metric: took 8.327522ms waiting for pod "kube-controller-manager-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.677792   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:56.839760   30150 request.go:629] Waited for 161.87259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:17:56.839833   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kjph8
	I1205 20:17:56.839839   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:56.839847   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:56.839853   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:56.843149   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:56.843174   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:56.843184   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:56.843192   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:56.843201   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:56 GMT
	I1205 20:17:56.843210   30150 round_trippers.go:580]     Audit-Id: 7c34cbff-68a6-4db7-9686-c40c9445a42c
	I1205 20:17:56.843217   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:56.843246   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:56.843361   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kjph8","generateName":"kube-proxy-","namespace":"kube-system","uid":"05167608-ef4c-4bac-b57b-0330ab4cef76","resourceVersion":"1081","creationTimestamp":"2023-12-05T20:04:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1205 20:17:57.039438   30150 request.go:629] Waited for 195.357994ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:17:57.039490   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m02
	I1205 20:17:57.039495   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:57.039503   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:57.039509   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:57.041588   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:57.041614   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:57.041621   30150 round_trippers.go:580]     Audit-Id: c7a7db71-099a-4d2b-ac2f-191d20349bfb
	I1205 20:17:57.041627   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:57.041634   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:57.041642   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:57.041650   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:57.041659   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:57 GMT
	I1205 20:17:57.041873   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m02","uid":"0402e7e4-4e77-4d49-9b99-eee89333fa24","resourceVersion":"1236","creationTimestamp":"2023-12-05T20:16:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_17_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:16:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I1205 20:17:57.042147   30150 pod_ready.go:92] pod "kube-proxy-kjph8" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:57.042160   30150 pod_ready.go:81] duration metric: took 364.362684ms waiting for pod "kube-proxy-kjph8" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:57.042168   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:57.239471   30150 request.go:629] Waited for 197.237756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:17:57.239544   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgmt2
	I1205 20:17:57.239552   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:57.239564   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:57.239577   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:57.243123   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:57.243143   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:57.243153   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:57.243162   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:57 GMT
	I1205 20:17:57.243170   30150 round_trippers.go:580]     Audit-Id: 45f5efbc-0155-4bd0-9915-567e8771cac1
	I1205 20:17:57.243178   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:57.243185   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:57.243192   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:57.243395   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgmt2","generateName":"kube-proxy-","namespace":"kube-system","uid":"41275cfd-cb0f-4886-b1bc-a86b7e20cc14","resourceVersion":"783","creationTimestamp":"2023-12-05T20:04:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:04:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1205 20:17:57.439176   30150 request.go:629] Waited for 195.197691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:57.439245   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:57.439264   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:57.439277   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:57.439286   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:57.442761   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:57.442790   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:57.442802   30150 round_trippers.go:580]     Audit-Id: 828042b5-b196-4724-b150-29b619c5bcc0
	I1205 20:17:57.442809   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:57.442818   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:57.442832   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:57.442844   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:57.442855   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:57 GMT
	I1205 20:17:57.443030   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:57.443466   30150 pod_ready.go:92] pod "kube-proxy-mgmt2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:57.443493   30150 pod_ready.go:81] duration metric: took 401.318399ms waiting for pod "kube-proxy-mgmt2" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:57.443506   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:57.639929   30150 request.go:629] Waited for 196.347463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:17:57.639983   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xvjj7
	I1205 20:17:57.639989   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:57.639996   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:57.640002   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:57.642879   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:57.642898   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:57.642904   30150 round_trippers.go:580]     Audit-Id: b082116e-00b2-4739-85b3-04ffca503fe4
	I1205 20:17:57.642910   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:57.642915   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:57.642920   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:57.642925   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:57.642930   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:57 GMT
	I1205 20:17:57.643274   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xvjj7","generateName":"kube-proxy-","namespace":"kube-system","uid":"19641919-0011-4726-b884-cc468d0f2dd0","resourceVersion":"1257","creationTimestamp":"2023-12-05T20:05:38Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:05:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a48e72c5-7be1-4c34-b9c1-f2e4b1b791a6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1205 20:17:57.838973   30150 request.go:629] Waited for 195.319844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:17:57.839041   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947-m03
	I1205 20:17:57.839046   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:57.839069   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:57.839075   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:57.844556   30150 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:17:57.844581   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:57.844589   30150 round_trippers.go:580]     Audit-Id: b27f3528-633f-489a-a1f3-169778f6670c
	I1205 20:17:57.844595   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:57.844600   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:57.844607   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:57.844614   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:57.844622   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:57 GMT
	I1205 20:17:57.844745   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947-m03","uid":"d22ae828-3b5d-4214-bed9-a53cc4d7e9ca","resourceVersion":"1237","creationTimestamp":"2023-12-05T20:17:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_05T20_17_56_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:17:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I1205 20:17:57.845131   30150 pod_ready.go:92] pod "kube-proxy-xvjj7" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:57.845161   30150 pod_ready.go:81] duration metric: took 401.646695ms waiting for pod "kube-proxy-xvjj7" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:57.845174   30150 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:58.039471   30150 request.go:629] Waited for 194.231457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:17:58.039537   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-558947
	I1205 20:17:58.039544   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:58.039553   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:58.039559   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:58.042472   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:58.042496   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:58.042504   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:58 GMT
	I1205 20:17:58.042509   30150 round_trippers.go:580]     Audit-Id: da1ebfa0-16bc-4b54-83f0-65aae372e9f0
	I1205 20:17:58.042515   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:58.042520   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:58.042528   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:58.042536   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:58.042717   30150 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-558947","namespace":"kube-system","uid":"526e311f-432f-4c9a-ad6e-19855cae55be","resourceVersion":"897","creationTimestamp":"2023-12-05T20:03:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.mirror":"fbb96bfe6bd490571ac773b3d4c70ba1","kubernetes.io/config.seen":"2023-12-05T20:03:56.146039635Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-05T20:03:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1205 20:17:58.239433   30150 request.go:629] Waited for 196.376246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:58.239505   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes/multinode-558947
	I1205 20:17:58.239510   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:58.239518   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:58.239524   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:58.242414   30150 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:17:58.242432   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:58.242439   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:58.242444   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:58 GMT
	I1205 20:17:58.242449   30150 round_trippers.go:580]     Audit-Id: 3e96f115-9529-4d61-af83-0fee8ca7d053
	I1205 20:17:58.242455   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:58.242460   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:58.242465   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:58.242731   30150 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-05T20:03:52Z","fieldsType":"FieldsV1","fiel [truncated 6211 chars]
	I1205 20:17:58.243077   30150 pod_ready.go:92] pod "kube-scheduler-multinode-558947" in "kube-system" namespace has status "Ready":"True"
	I1205 20:17:58.243097   30150 pod_ready.go:81] duration metric: took 397.913839ms waiting for pod "kube-scheduler-multinode-558947" in "kube-system" namespace to be "Ready" ...
	I1205 20:17:58.243106   30150 pod_ready.go:38] duration metric: took 1.600314659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:17:58.243119   30150 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:17:58.243178   30150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:17:58.256419   30150 system_svc.go:56] duration metric: took 13.29393ms WaitForService to wait for kubelet.
	I1205 20:17:58.256449   30150 kubeadm.go:581] duration metric: took 1.637020734s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:17:58.256479   30150 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:17:58.439867   30150 request.go:629] Waited for 183.321881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.3:8443/api/v1/nodes
	I1205 20:17:58.439927   30150 round_trippers.go:463] GET https://192.168.39.3:8443/api/v1/nodes
	I1205 20:17:58.439932   30150 round_trippers.go:469] Request Headers:
	I1205 20:17:58.439939   30150 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:17:58.439946   30150 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:17:58.443104   30150 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:17:58.443122   30150 round_trippers.go:577] Response Headers:
	I1205 20:17:58.443129   30150 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1d523845-e364-4c06-a541-3cc0a5ad6e71
	I1205 20:17:58.443135   30150 round_trippers.go:580]     Date: Tue, 05 Dec 2023 20:17:58 GMT
	I1205 20:17:58.443140   30150 round_trippers.go:580]     Audit-Id: c838dbd6-f0db-4c7e-b210-5e204cff5323
	I1205 20:17:58.443145   30150 round_trippers.go:580]     Cache-Control: no-cache, private
	I1205 20:17:58.443150   30150 round_trippers.go:580]     Content-Type: application/json
	I1205 20:17:58.443156   30150 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d0c732c-dec1-4e1c-b1fc-1bdee656b531
	I1205 20:17:58.443371   30150 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1261"},"items":[{"metadata":{"name":"multinode-558947","uid":"54c73bdd-3643-425d-be87-afa0f693f955","resourceVersion":"923","creationTimestamp":"2023-12-05T20:03:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-558947","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7b46af276bae825d70472f5e115d38eac802d728","minikube.k8s.io/name":"multinode-558947","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_05T20_03_57_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16236 chars]
	I1205 20:17:58.443982   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:17:58.444000   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:17:58.444010   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:17:58.444013   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:17:58.444017   30150 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:17:58.444021   30150 node_conditions.go:123] node cpu capacity is 2
	I1205 20:17:58.444024   30150 node_conditions.go:105] duration metric: took 187.541354ms to run NodePressure ...
	I1205 20:17:58.444033   30150 start.go:228] waiting for startup goroutines ...
	I1205 20:17:58.444049   30150 start.go:242] writing updated cluster config ...
	I1205 20:17:58.444293   30150 ssh_runner.go:195] Run: rm -f paused
	I1205 20:17:58.492924   30150 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:17:58.495696   30150 out.go:177] * Done! kubectl is now configured to use "multinode-558947" cluster and "default" namespace by default
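The wait loop recorded above (pod_ready.go polling each control-plane pod, then the NodePressure capacity check) can be reproduced outside the test harness. Below is a minimal client-go sketch, not minikube's own code: the kubeconfig path, polling interval and timeout are illustrative assumptions, and the pod name is copied from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig; once "Done!" is printed above,
	// minikube has written the multinode-558947 context there.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the scheduler pod reports Ready, roughly what pod_ready.go does
	// within its 6m0s budget (interval and timeout here are illustrative).
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-558947", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("kube-scheduler ready:", err == nil)
}

Note that the repeated "Waited for ...ms due to client-side throttling, not priority and fairness" lines above are produced by client-go's own rate limiter, as the message itself says, not by API Priority and Fairness on the server.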
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:13:41 UTC, ends at Tue 2023-12-05 20:17:59 UTC. --
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.590584346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701807479590572073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ba36076c-803f-4056-9636-8fcf46f359c5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.591243725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cab26959-60b3-455a-80eb-d47352769d98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.591286672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cab26959-60b3-455a-80eb-d47352769d98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.591469369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1e65184d2591db0600d18222f020ba7b91be8187f6d2b707a3e7f4681bc23f4,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701807286502279022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39135bd1e1fa59e447422bcd1dae102fc1380cb6e7c6453321bd17bdae876962,PodSandboxId:7656c38b5d07dd22ce6617527ef22deb046486b4a14b98b0bdea77b510400c40,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701807272122620554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dbc26cd82d0362d862ed719eccf2fbb67359c15c00a3dc2bbb53538b9ef72c,PodSandboxId:bd8095de039b599f797fc72fe45c4a6c527a339483409fabf2fe0b6e4a3ecefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701807270648250119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1c66e6b2e780fefd594ce0f1358eb0efce0820799dd1ba428336abf9f6c034,PodSandboxId:197c98fd27f8dfa2760b710b1eb68c54eb8ea0fb376dc7b549f19f7dbef1e54f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701807257837308479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff3991eaeee56a89c873851712d2b66966872592df008ad2dd92c0aaba1ec6d,PodSandboxId:ce34cebac1afa43e0eaa25e0a8e07807747bf0838b1178b63f56cf552564ec67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701807255331779723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537ab3e1198318a237ac23148b01f5af6f746fe0bc23d022e22aace896c7e4af,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701807255283018025,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144
038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc20ee7c4652ea4de91c752d6946e7cfaf7d1b250f3587bee6b5aaa2f49004ed,PodSandboxId:08db5dd8342a207abec3692d69b22e710f74fca111e5b98941517f3f4d38b368,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701807249024945031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 24713faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab3ab2d98f6d2c85fd3c50c92389da314f40638869c8ad2c7bd7cee55f4e0d83,PodSandboxId:488651d713ef1082737b9f27b7fd07eef2113556c91c63beb7fc225de5251ee0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701807248597119778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.container.hash
: 4d16373e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6504160555c141e965b82f835fdc9c2e1046c55ac022878dd12859cb3f39ae,PodSandboxId:c56a7c9b0f2639005355995b5a57229f04fd063d5ccab6629ff6edc73088fcbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701807248480226492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733ef0e1895385a5aa71185eafbc70fa2012924aaff15fbc6e6669afecc52257,PodSandboxId:fa3594c27df353ce63979cb89c9fb72321aafff1523d90ea8605ec358847d1da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701807248307209069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cab26959-60b3-455a-80eb-d47352769d98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.631162321Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4a8df83b-6b41-4c5f-8c24-178b111a3053 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.631250856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4a8df83b-6b41-4c5f-8c24-178b111a3053 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.632618836Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3ed935fd-1484-4e42-b439-396604711108 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.633212582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701807479633195978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3ed935fd-1484-4e42-b439-396604711108 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.633779974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=403775ee-8894-485f-add9-1636a48e43dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.633825898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=403775ee-8894-485f-add9-1636a48e43dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.634109634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1e65184d2591db0600d18222f020ba7b91be8187f6d2b707a3e7f4681bc23f4,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701807286502279022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39135bd1e1fa59e447422bcd1dae102fc1380cb6e7c6453321bd17bdae876962,PodSandboxId:7656c38b5d07dd22ce6617527ef22deb046486b4a14b98b0bdea77b510400c40,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701807272122620554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dbc26cd82d0362d862ed719eccf2fbb67359c15c00a3dc2bbb53538b9ef72c,PodSandboxId:bd8095de039b599f797fc72fe45c4a6c527a339483409fabf2fe0b6e4a3ecefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701807270648250119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1c66e6b2e780fefd594ce0f1358eb0efce0820799dd1ba428336abf9f6c034,PodSandboxId:197c98fd27f8dfa2760b710b1eb68c54eb8ea0fb376dc7b549f19f7dbef1e54f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701807257837308479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff3991eaeee56a89c873851712d2b66966872592df008ad2dd92c0aaba1ec6d,PodSandboxId:ce34cebac1afa43e0eaa25e0a8e07807747bf0838b1178b63f56cf552564ec67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701807255331779723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537ab3e1198318a237ac23148b01f5af6f746fe0bc23d022e22aace896c7e4af,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701807255283018025,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144
038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc20ee7c4652ea4de91c752d6946e7cfaf7d1b250f3587bee6b5aaa2f49004ed,PodSandboxId:08db5dd8342a207abec3692d69b22e710f74fca111e5b98941517f3f4d38b368,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701807249024945031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 24713faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab3ab2d98f6d2c85fd3c50c92389da314f40638869c8ad2c7bd7cee55f4e0d83,PodSandboxId:488651d713ef1082737b9f27b7fd07eef2113556c91c63beb7fc225de5251ee0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701807248597119778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.container.hash
: 4d16373e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6504160555c141e965b82f835fdc9c2e1046c55ac022878dd12859cb3f39ae,PodSandboxId:c56a7c9b0f2639005355995b5a57229f04fd063d5ccab6629ff6edc73088fcbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701807248480226492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733ef0e1895385a5aa71185eafbc70fa2012924aaff15fbc6e6669afecc52257,PodSandboxId:fa3594c27df353ce63979cb89c9fb72321aafff1523d90ea8605ec358847d1da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701807248307209069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=403775ee-8894-485f-add9-1636a48e43dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.677956587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3f7d3f9a-56ef-4efb-b07d-3d14b8f58c92 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.678048405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3f7d3f9a-56ef-4efb-b07d-3d14b8f58c92 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.679694069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fe9f9516-47f0-4930-9cf6-ae953f1d0696 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.680176638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701807479680163515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fe9f9516-47f0-4930-9cf6-ae953f1d0696 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.680841812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0e5b42ca-305d-4e57-958b-c7ccc5951fbf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.680985439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0e5b42ca-305d-4e57-958b-c7ccc5951fbf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.681196647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1e65184d2591db0600d18222f020ba7b91be8187f6d2b707a3e7f4681bc23f4,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701807286502279022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39135bd1e1fa59e447422bcd1dae102fc1380cb6e7c6453321bd17bdae876962,PodSandboxId:7656c38b5d07dd22ce6617527ef22deb046486b4a14b98b0bdea77b510400c40,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701807272122620554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dbc26cd82d0362d862ed719eccf2fbb67359c15c00a3dc2bbb53538b9ef72c,PodSandboxId:bd8095de039b599f797fc72fe45c4a6c527a339483409fabf2fe0b6e4a3ecefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701807270648250119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1c66e6b2e780fefd594ce0f1358eb0efce0820799dd1ba428336abf9f6c034,PodSandboxId:197c98fd27f8dfa2760b710b1eb68c54eb8ea0fb376dc7b549f19f7dbef1e54f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701807257837308479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff3991eaeee56a89c873851712d2b66966872592df008ad2dd92c0aaba1ec6d,PodSandboxId:ce34cebac1afa43e0eaa25e0a8e07807747bf0838b1178b63f56cf552564ec67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701807255331779723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537ab3e1198318a237ac23148b01f5af6f746fe0bc23d022e22aace896c7e4af,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701807255283018025,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144
038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc20ee7c4652ea4de91c752d6946e7cfaf7d1b250f3587bee6b5aaa2f49004ed,PodSandboxId:08db5dd8342a207abec3692d69b22e710f74fca111e5b98941517f3f4d38b368,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701807249024945031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 24713faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab3ab2d98f6d2c85fd3c50c92389da314f40638869c8ad2c7bd7cee55f4e0d83,PodSandboxId:488651d713ef1082737b9f27b7fd07eef2113556c91c63beb7fc225de5251ee0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701807248597119778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.container.hash
: 4d16373e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6504160555c141e965b82f835fdc9c2e1046c55ac022878dd12859cb3f39ae,PodSandboxId:c56a7c9b0f2639005355995b5a57229f04fd063d5ccab6629ff6edc73088fcbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701807248480226492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733ef0e1895385a5aa71185eafbc70fa2012924aaff15fbc6e6669afecc52257,PodSandboxId:fa3594c27df353ce63979cb89c9fb72321aafff1523d90ea8605ec358847d1da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701807248307209069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0e5b42ca-305d-4e57-958b-c7ccc5951fbf name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.719348048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=737080b7-8b4a-4d60-b172-d389d2e160b0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.719433399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=737080b7-8b4a-4d60-b172-d389d2e160b0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.720572537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cfbd0840-0f39-4644-9f92-6318354c12dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.721061343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701807479721047999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=cfbd0840-0f39-4644-9f92-6318354c12dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.721718437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c152fca-1b05-44f6-9532-526d93957a08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.721789585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c152fca-1b05-44f6-9532-526d93957a08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:17:59 multinode-558947 crio[711]: time="2023-12-05 20:17:59.722053533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d1e65184d2591db0600d18222f020ba7b91be8187f6d2b707a3e7f4681bc23f4,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701807286502279022,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39135bd1e1fa59e447422bcd1dae102fc1380cb6e7c6453321bd17bdae876962,PodSandboxId:7656c38b5d07dd22ce6617527ef22deb046486b4a14b98b0bdea77b510400c40,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701807272122620554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-6www8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 448efe43-2e13-4b86-9c87-090ece8e686e,},Annotations:map[string]string{io.kubernetes.container.hash: bff397a,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dbc26cd82d0362d862ed719eccf2fbb67359c15c00a3dc2bbb53538b9ef72c,PodSandboxId:bd8095de039b599f797fc72fe45c4a6c527a339483409fabf2fe0b6e4a3ecefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701807270648250119,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-knl4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6c367-593c-469a-90c6-b3c13cedc3df,},Annotations:map[string]string{io.kubernetes.container.hash: 481a1f44,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1c66e6b2e780fefd594ce0f1358eb0efce0820799dd1ba428336abf9f6c034,PodSandboxId:197c98fd27f8dfa2760b710b1eb68c54eb8ea0fb376dc7b549f19f7dbef1e54f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701807257837308479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cv76g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 88acd23e-99f5-4c5f-a03c-1c961a511eac,},Annotations:map[string]string{io.kubernetes.container.hash: d6303632,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff3991eaeee56a89c873851712d2b66966872592df008ad2dd92c0aaba1ec6d,PodSandboxId:ce34cebac1afa43e0eaa25e0a8e07807747bf0838b1178b63f56cf552564ec67,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701807255331779723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mgmt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41275cfd-cb0f-4886-b1bc-a86b7e2
0cc14,},Annotations:map[string]string{io.kubernetes.container.hash: aa754cca,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537ab3e1198318a237ac23148b01f5af6f746fe0bc23d022e22aace896c7e4af,PodSandboxId:a30621008891f937b817b910030bc40713d43235f0e151b45d481626550dbcbb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701807255283018025,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58d4c242-7ea5-49f5-999c-3c9135144
038,},Annotations:map[string]string{io.kubernetes.container.hash: 5d6c7314,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc20ee7c4652ea4de91c752d6946e7cfaf7d1b250f3587bee6b5aaa2f49004ed,PodSandboxId:08db5dd8342a207abec3692d69b22e710f74fca111e5b98941517f3f4d38b368,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701807249024945031,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d17798ae1d41feb30e7640ec43442332,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 24713faf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab3ab2d98f6d2c85fd3c50c92389da314f40638869c8ad2c7bd7cee55f4e0d83,PodSandboxId:488651d713ef1082737b9f27b7fd07eef2113556c91c63beb7fc225de5251ee0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701807248597119778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a38ef6c4499d9729cedfe70dc9f6984,},Annotations:map[string]string{io.kubernetes.container.hash
: 4d16373e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6504160555c141e965b82f835fdc9c2e1046c55ac022878dd12859cb3f39ae,PodSandboxId:c56a7c9b0f2639005355995b5a57229f04fd063d5ccab6629ff6edc73088fcbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701807248480226492,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbb96bfe6bd490571ac773b3d4c70ba1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733ef0e1895385a5aa71185eafbc70fa2012924aaff15fbc6e6669afecc52257,PodSandboxId:fa3594c27df353ce63979cb89c9fb72321aafff1523d90ea8605ec358847d1da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701807248307209069,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-558947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4039ac5faaadd6fc4a75accac6480b7,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c152fca-1b05-44f6-9532-526d93957a08 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d1e65184d2591       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   a30621008891f       storage-provisioner
	39135bd1e1fa5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   7656c38b5d07d       busybox-5bc68d56bd-6www8
	06dbc26cd82d0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   bd8095de039b5       coredns-5dd5756b68-knl4d
	2a1c66e6b2e78       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   197c98fd27f8d       kindnet-cv76g
	fff3991eaeee5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   ce34cebac1afa       kube-proxy-mgmt2
	537ab3e119831       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   a30621008891f       storage-provisioner
	bc20ee7c4652e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   08db5dd8342a2       etcd-multinode-558947
	ab3ab2d98f6d2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   488651d713ef1       kube-apiserver-multinode-558947
	8f6504160555c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   c56a7c9b0f263       kube-scheduler-multinode-558947
	733ef0e189538       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   fa3594c27df35       kube-controller-manager-multinode-558947
	
	* 
	* ==> coredns [06dbc26cd82d0362d862ed719eccf2fbb67359c15c00a3dc2bbb53538b9ef72c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33541 - 26418 "HINFO IN 8372845960506787856.4469806435216413437. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010408598s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-558947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-558947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-558947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_03_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:03:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-558947
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:17:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:14:44 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:14:44 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:14:44 +0000   Tue, 05 Dec 2023 20:03:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:14:44 +0000   Tue, 05 Dec 2023 20:14:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-558947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dc05fa07b7a45888dae1cab7c1f644b
	  System UUID:                2dc05fa0-7b7a-4588-8dae-1cab7c1f644b
	  Boot ID:                    7b97df4a-d93b-40c8-b1f7-ea4cda58e0f3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6www8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-knl4d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-558947                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-cv76g                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-558947             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-558947    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mgmt2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-558947             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-558947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-558947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-558947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-558947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-558947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-558947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-558947 event: Registered Node multinode-558947 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-558947 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node multinode-558947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node multinode-558947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node multinode-558947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-558947 event: Registered Node multinode-558947 in Controller
	
	
	Name:               multinode-558947-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-558947-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-558947
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_05T20_17_56_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:16:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-558947-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:17:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:16:14 +0000   Tue, 05 Dec 2023 20:16:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:16:14 +0000   Tue, 05 Dec 2023 20:16:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:16:14 +0000   Tue, 05 Dec 2023 20:16:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:16:14 +0000   Tue, 05 Dec 2023 20:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-558947-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 77915f8f899f4a91a25258a548ce6f37
	  System UUID:                77915f8f-899f-4a91-a252-58a548ce6f37
	  Boot ID:                    99d55947-ac81-4222-ac3e-9e89168a367c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-p8lhl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-xcs7j               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-kjph8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 104s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-558947-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-558947-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-558947-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-558947-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m55s                  kubelet     Node multinode-558947-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m12s (x2 over 3m12s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet     Node multinode-558947-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet     Node multinode-558947-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet     Node multinode-558947-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet     Node multinode-558947-m02 status is now: NodeReady
	
	
	Name:               multinode-558947-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-558947-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=multinode-558947
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_05T20_17_56_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:17:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-558947-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:17:55 +0000   Tue, 05 Dec 2023 20:17:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:17:55 +0000   Tue, 05 Dec 2023 20:17:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:17:55 +0000   Tue, 05 Dec 2023 20:17:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:17:55 +0000   Tue, 05 Dec 2023 20:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    multinode-558947-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ae95a2da3ad41018632acedfd29629a
	  System UUID:                3ae95a2d-a3ad-4101-8632-acedfd29629a
	  Boot ID:                    0aee10fd-6707-424a-bed9-4f7624eab570
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-bxtwv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-7dnjd               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-xvjj7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 2s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-558947-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-558947-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-558947-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-558947-m03 status is now: NodeReady
	  Normal   NodeNotReady             74s                 kubelet     Node multinode-558947-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        41s (x2 over 101s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-558947-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-558947-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-558947-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.064596] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.421793] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150614] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.415558] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.043559] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.108802] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.144937] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.108013] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.219782] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Dec 5 20:14] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +19.707450] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [bc20ee7c4652ea4de91c752d6946e7cfaf7d1b250f3587bee6b5aaa2f49004ed] <==
	* {"level":"info","ts":"2023-12-05T20:14:10.699669Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:14:10.696847Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-12-05T20:14:10.699039Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:14:10.700285Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:14:10.700327Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:14:10.699262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c switched to configuration voters=(12397538410003441052)"}
	{"level":"info","ts":"2023-12-05T20:14:10.700456Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","added-peer-id":"ac0ce77fb984259c","added-peer-peer-urls":["https://192.168.39.3:2380"]}
	{"level":"info","ts":"2023-12-05T20:14:10.700554Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:14:10.700598Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:14:10.699319Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2023-12-05T20:14:10.707125Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2023-12-05T20:14:12.446314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-05T20:14:12.446399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:14:12.446434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2023-12-05T20:14:12.446447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became candidate at term 3"}
	{"level":"info","ts":"2023-12-05T20:14:12.446453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgVoteResp from ac0ce77fb984259c at term 3"}
	{"level":"info","ts":"2023-12-05T20:14:12.446464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became leader at term 3"}
	{"level":"info","ts":"2023-12-05T20:14:12.44649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac0ce77fb984259c elected leader ac0ce77fb984259c at term 3"}
	{"level":"info","ts":"2023-12-05T20:14:12.448466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:14:12.449255Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ac0ce77fb984259c","local-member-attributes":"{Name:multinode-558947 ClientURLs:[https://192.168.39.3:2379]}","request-path":"/0/members/ac0ce77fb984259c/attributes","cluster-id":"1d030e9334923ef1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:14:12.449436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:14:12.449645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:14:12.450809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.3:2379"}
	{"level":"info","ts":"2023-12-05T20:14:12.450889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:14:12.451023Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:18:00 up 4 min,  0 users,  load average: 0.13, 0.24, 0.12
	Linux multinode-558947 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2a1c66e6b2e780fefd594ce0f1358eb0efce0820799dd1ba428336abf9f6c034] <==
	* I1205 20:17:29.709794       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:17:29.709976       1 main.go:227] handling current node
	I1205 20:17:29.710019       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:17:29.710054       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	I1205 20:17:29.710190       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I1205 20:17:29.710211       1 main.go:250] Node multinode-558947-m03 has CIDR [10.244.3.0/24] 
	I1205 20:17:39.723624       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:17:39.723673       1 main.go:227] handling current node
	I1205 20:17:39.723684       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:17:39.723691       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	I1205 20:17:39.723811       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I1205 20:17:39.723826       1 main.go:250] Node multinode-558947-m03 has CIDR [10.244.3.0/24] 
	I1205 20:17:49.737773       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:17:49.737838       1 main.go:227] handling current node
	I1205 20:17:49.737854       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:17:49.737860       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	I1205 20:17:49.738061       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I1205 20:17:49.738069       1 main.go:250] Node multinode-558947-m03 has CIDR [10.244.3.0/24] 
	I1205 20:17:59.753214       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I1205 20:17:59.753280       1 main.go:227] handling current node
	I1205 20:17:59.753303       1 main.go:223] Handling node with IPs: map[192.168.39.10:{}]
	I1205 20:17:59.753312       1 main.go:250] Node multinode-558947-m02 has CIDR [10.244.1.0/24] 
	I1205 20:17:59.753428       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I1205 20:17:59.753435       1 main.go:250] Node multinode-558947-m03 has CIDR [10.244.2.0/24] 
	I1205 20:17:59.753520       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.248 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [ab3ab2d98f6d2c85fd3c50c92389da314f40638869c8ad2c7bd7cee55f4e0d83] <==
	* I1205 20:14:13.851864       1 establishing_controller.go:76] Starting EstablishingController
	I1205 20:14:13.851894       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1205 20:14:13.853024       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1205 20:14:13.853065       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1205 20:14:13.938680       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:14:13.939350       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 20:14:13.989525       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:14:13.990368       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 20:14:13.991428       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 20:14:13.991473       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 20:14:13.994791       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 20:14:13.996162       1 aggregator.go:166] initial CRD sync complete...
	I1205 20:14:13.996212       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 20:14:13.996236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:14:13.996259       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:14:14.010232       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1205 20:14:14.010314       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 20:14:14.795842       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:14:16.472461       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:14:16.592150       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 20:14:16.602573       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 20:14:16.680198       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:14:16.687840       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:14:26.505785       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 20:14:26.655467       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [733ef0e1895385a5aa71185eafbc70fa2012924aaff15fbc6e6669afecc52257] <==
	* I1205 20:16:13.542679       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m03"
	I1205 20:16:14.174750       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-phsxm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-phsxm"
	I1205 20:16:14.174857       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m03"
	I1205 20:16:14.177596       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-558947-m02\" does not exist"
	I1205 20:16:14.184117       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-558947-m02" podCIDRs=["10.244.1.0/24"]
	I1205 20:16:14.318850       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:16:15.093469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="76.303µs"
	I1205 20:16:28.343142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="258.326µs"
	I1205 20:16:28.939118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.419µs"
	I1205 20:16:28.942388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.813µs"
	I1205 20:16:46.593829       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:17:51.782854       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-p8lhl"
	I1205 20:17:51.795304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.455895ms"
	I1205 20:17:51.825677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.302591ms"
	I1205 20:17:51.825786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.218µs"
	I1205 20:17:51.825873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.501µs"
	I1205 20:17:53.203849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.356834ms"
	I1205 20:17:53.204376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="147.365µs"
	I1205 20:17:54.797177       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:17:55.487660       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-bxtwv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-bxtwv"
	I1205 20:17:55.488025       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:17:55.488072       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-558947-m03\" does not exist"
	I1205 20:17:55.517680       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-558947-m03" podCIDRs=["10.244.2.0/24"]
	I1205 20:17:55.532760       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-558947-m02"
	I1205 20:17:56.383208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.18µs"
	
	* 
	* ==> kube-proxy [fff3991eaeee56a89c873851712d2b66966872592df008ad2dd92c0aaba1ec6d] <==
	* I1205 20:14:15.646882       1 server_others.go:69] "Using iptables proxy"
	I1205 20:14:15.713784       1 node.go:141] Successfully retrieved node IP: 192.168.39.3
	I1205 20:14:15.770156       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:14:15.770235       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:14:15.791697       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:14:15.791770       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:14:15.792017       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:14:15.802406       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:14:15.808537       1 config.go:188] "Starting service config controller"
	I1205 20:14:15.808566       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:14:15.808582       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:14:15.808585       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:14:15.810797       1 config.go:315] "Starting node config controller"
	I1205 20:14:15.810808       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:14:15.909046       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:14:15.909112       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:14:15.946199       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8f6504160555c141e965b82f835fdc9c2e1046c55ac022878dd12859cb3f39ae] <==
	* I1205 20:14:11.011698       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:14:13.880827       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:14:13.881118       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:14:13.881130       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:14:13.881137       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:14:13.951057       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1205 20:14:13.954094       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:14:13.956582       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:14:13.956843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:14:13.956891       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:14:13.957005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:14:14.057496       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:13:41 UTC, ends at Tue 2023-12-05 20:18:00 UTC. --
	Dec 05 20:14:17 multinode-558947 kubelet[917]: E1205 20:14:17.978883     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/448efe43-2e13-4b86-9c87-090ece8e686e-kube-api-access-zf29r podName:448efe43-2e13-4b86-9c87-090ece8e686e nodeName:}" failed. No retries permitted until 2023-12-05 20:14:21.978870448 +0000 UTC m=+14.946835381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zf29r" (UniqueName: "kubernetes.io/projected/448efe43-2e13-4b86-9c87-090ece8e686e-kube-api-access-zf29r") pod "busybox-5bc68d56bd-6www8" (UID: "448efe43-2e13-4b86-9c87-090ece8e686e") : object "default"/"kube-root-ca.crt" not registered
	Dec 05 20:14:18 multinode-558947 kubelet[917]: E1205 20:14:18.280791     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-knl4d" podUID="28d6c367-593c-469a-90c6-b3c13cedc3df"
	Dec 05 20:14:18 multinode-558947 kubelet[917]: E1205 20:14:18.280968     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-6www8" podUID="448efe43-2e13-4b86-9c87-090ece8e686e"
	Dec 05 20:14:20 multinode-558947 kubelet[917]: E1205 20:14:20.279579     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-knl4d" podUID="28d6c367-593c-469a-90c6-b3c13cedc3df"
	Dec 05 20:14:20 multinode-558947 kubelet[917]: E1205 20:14:20.280097     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-6www8" podUID="448efe43-2e13-4b86-9c87-090ece8e686e"
	Dec 05 20:14:21 multinode-558947 kubelet[917]: E1205 20:14:21.910184     917 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 20:14:21 multinode-558947 kubelet[917]: E1205 20:14:21.910312     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28d6c367-593c-469a-90c6-b3c13cedc3df-config-volume podName:28d6c367-593c-469a-90c6-b3c13cedc3df nodeName:}" failed. No retries permitted until 2023-12-05 20:14:29.910298598 +0000 UTC m=+22.878263535 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/28d6c367-593c-469a-90c6-b3c13cedc3df-config-volume") pod "coredns-5dd5756b68-knl4d" (UID: "28d6c367-593c-469a-90c6-b3c13cedc3df") : object "kube-system"/"coredns" not registered
	Dec 05 20:14:22 multinode-558947 kubelet[917]: E1205 20:14:22.011208     917 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 05 20:14:22 multinode-558947 kubelet[917]: E1205 20:14:22.011282     917 projected.go:198] Error preparing data for projected volume kube-api-access-zf29r for pod default/busybox-5bc68d56bd-6www8: object "default"/"kube-root-ca.crt" not registered
	Dec 05 20:14:22 multinode-558947 kubelet[917]: E1205 20:14:22.011331     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/448efe43-2e13-4b86-9c87-090ece8e686e-kube-api-access-zf29r podName:448efe43-2e13-4b86-9c87-090ece8e686e nodeName:}" failed. No retries permitted until 2023-12-05 20:14:30.011317648 +0000 UTC m=+22.979282582 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zf29r" (UniqueName: "kubernetes.io/projected/448efe43-2e13-4b86-9c87-090ece8e686e-kube-api-access-zf29r") pod "busybox-5bc68d56bd-6www8" (UID: "448efe43-2e13-4b86-9c87-090ece8e686e") : object "default"/"kube-root-ca.crt" not registered
	Dec 05 20:14:22 multinode-558947 kubelet[917]: E1205 20:14:22.280626     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-knl4d" podUID="28d6c367-593c-469a-90c6-b3c13cedc3df"
	Dec 05 20:14:22 multinode-558947 kubelet[917]: E1205 20:14:22.281036     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-6www8" podUID="448efe43-2e13-4b86-9c87-090ece8e686e"
	Dec 05 20:14:46 multinode-558947 kubelet[917]: I1205 20:14:46.480689     917 scope.go:117] "RemoveContainer" containerID="537ab3e1198318a237ac23148b01f5af6f746fe0bc23d022e22aace896c7e4af"
	Dec 05 20:15:07 multinode-558947 kubelet[917]: E1205 20:15:07.296210     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 20:15:07 multinode-558947 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:15:07 multinode-558947 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:15:07 multinode-558947 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:16:07 multinode-558947 kubelet[917]: E1205 20:16:07.299196     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 20:16:07 multinode-558947 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:16:07 multinode-558947 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:16:07 multinode-558947 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:17:07 multinode-558947 kubelet[917]: E1205 20:17:07.304321     917 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 20:17:07 multinode-558947 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:17:07 multinode-558947 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:17:07 multinode-558947 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-558947 -n multinode-558947
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-558947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (691.05s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558947 stop: exit status 82 (2m1.404231175s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-558947"  ...
	* Stopping node "multinode-558947"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-558947 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status
E1205 20:20:16.959452   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558947 status: exit status 3 (18.628039515s)

                                                
                                                
-- stdout --
	multinode-558947
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-558947-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:20:22.994647   32894 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E1205 20:20:22.994691   32894 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-558947 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-558947 -n multinode-558947
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-558947 -n multinode-558947: exit status 3 (3.16442116s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:20:26.322599   32980 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E1205 20:20:26.322623   32980 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-558947" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.20s)
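When a stop fails with GUEST_STOP_TIMEOUT as above, the two artifacts the error box asks for can be collected before the profile is torn down. A minimal sketch, using the paths quoted verbatim in the stderr block:

	# Full minikube logs for the profile (this may itself fail if the host
	# VM is unreachable, as the subsequent status calls suggest).
	out/minikube-linux-amd64 -p multinode-558947 logs --file=logs.txt

	# The stop-specific log referenced in the error box.
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .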

                                                
                                    
x
+
TestPreload (279.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-967823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 20:30:16.959547   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:30:49.699956   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-967823 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.271555779s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-967823 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-967823 image pull gcr.io/k8s-minikube/busybox: (1.095567579s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-967823
E1205 20:32:37.060734   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:32:46.651891   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-967823: exit status 82 (2m1.136343962s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-967823"  ...
	* Stopping node "test-preload-967823"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-967823 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-12-05 20:33:05.352821176 +0000 UTC m=+3498.165280811
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967823 -n test-preload-967823
E1205 20:33:20.010608   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-967823 -n test-preload-967823: exit status 3 (18.446042394s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:33:23.794648   35973 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host
	E1205 20:33:23.794667   35973 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-967823" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-967823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-967823
--- FAIL: TestPreload (279.84s)
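For reference, the failing sequence can be reproduced outside the test harness with the same commands the test runs (taken verbatim from the log above; the profile name is arbitrary):

	out/minikube-linux-amd64 start -p test-preload-967823 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-967823 image pull gcr.io/k8s-minikube/busybox
	# This is the step that hangs and exits 82 after roughly two minutes:
	out/minikube-linux-amd64 stop -p test-preload-967823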

                                                
                                    
x
+
TestRunningBinaryUpgrade (147.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.4012105162.exe start -p running-upgrade-254896 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.4012105162.exe start -p running-upgrade-254896 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m20.766440705s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-254896 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-254896 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (4.432721489s)

                                                
                                                
-- stdout --
	* [running-upgrade-254896] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-254896 in cluster running-upgrade-254896
	* Updating the running kvm2 "running-upgrade-254896" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:37:44.464674   38658 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:37:44.464860   38658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:37:44.464868   38658 out.go:309] Setting ErrFile to fd 2...
	I1205 20:37:44.464874   38658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:37:44.465246   38658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:37:44.465962   38658 out.go:303] Setting JSON to false
	I1205 20:37:44.467633   38658 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4817,"bootTime":1701803847,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:37:44.467714   38658 start.go:138] virtualization: kvm guest
	I1205 20:37:44.470467   38658 out.go:177] * [running-upgrade-254896] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:37:44.472166   38658 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:37:44.472131   38658 notify.go:220] Checking for updates...
	I1205 20:37:44.475867   38658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:37:44.477245   38658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:37:44.478870   38658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:37:44.480268   38658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:37:44.481766   38658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:37:44.483821   38658 config.go:182] Loaded profile config "running-upgrade-254896": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:37:44.483858   38658 start_flags.go:694] config upgrade: Driver=kvm2
	I1205 20:37:44.483873   38658 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:37:44.483981   38658 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/running-upgrade-254896/config.json ...
	I1205 20:37:44.484919   38658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:44.484998   38658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:44.511921   38658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40413
	I1205 20:37:44.513757   38658 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:44.514531   38658 main.go:141] libmachine: Using API Version  1
	I1205 20:37:44.514553   38658 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:44.514926   38658 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:44.515181   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:44.517558   38658 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:37:44.519128   38658 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:37:44.519575   38658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:44.519619   38658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:44.544462   38658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I1205 20:37:44.545062   38658 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:44.545681   38658 main.go:141] libmachine: Using API Version  1
	I1205 20:37:44.545705   38658 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:44.546325   38658 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:44.546525   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:44.593402   38658 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:37:44.595031   38658 start.go:298] selected driver: kvm2
	I1205 20:37:44.595049   38658 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-254896 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:37:44.595177   38658 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:37:44.596150   38658 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.596250   38658 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:37:44.618913   38658 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:37:44.619450   38658 cni.go:84] Creating CNI manager for ""
	I1205 20:37:44.619474   38658 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1205 20:37:44.619486   38658 start_flags.go:323] config:
	{Name:running-upgrade-254896 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.117 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:37:44.619709   38658 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.623006   38658 out.go:177] * Starting control plane node running-upgrade-254896 in cluster running-upgrade-254896
	I1205 20:37:44.624421   38658 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1205 20:37:44.654416   38658 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:37:44.654580   38658 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/running-upgrade-254896/config.json ...
	I1205 20:37:44.654891   38658 start.go:365] acquiring machines lock for running-upgrade-254896: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:37:44.654960   38658 start.go:369] acquired machines lock for "running-upgrade-254896" in 42.497µs
	I1205 20:37:44.654978   38658 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:37:44.654988   38658 fix.go:54] fixHost starting: minikube
	I1205 20:37:44.655404   38658 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:37:44.655448   38658 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:37:44.655758   38658 cache.go:107] acquiring lock: {Name:mk28353803396fbe7c185ace9b7c2f728ad7e4f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.655829   38658 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:37:44.655841   38658 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.013µs
	I1205 20:37:44.655850   38658 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:37:44.655864   38658 cache.go:107] acquiring lock: {Name:mk7462b7dbcc5e2ca3994d68c9cee18f46803b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.655965   38658 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1205 20:37:44.656117   38658 cache.go:107] acquiring lock: {Name:mk0522e775dff44d620aa8a2c87b50c7de8a7987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.656202   38658 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1205 20:37:44.656324   38658 cache.go:107] acquiring lock: {Name:mkc10e94e0d249944b279af624ed092b227942a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.656408   38658 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1205 20:37:44.656507   38658 cache.go:107] acquiring lock: {Name:mk7098894afec2c26df02d2aad041391ca05429b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.656584   38658 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1205 20:37:44.656726   38658 cache.go:107] acquiring lock: {Name:mk9a0c91c42e5817c5aa4fb2cc05648531c85742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.656780   38658 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:37:44.656883   38658 cache.go:107] acquiring lock: {Name:mk63236ff2284399ef1084f4868ac79057954ac7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.656960   38658 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1205 20:37:44.657044   38658 cache.go:107] acquiring lock: {Name:mkda0b4f34f825dfe4e0303962ddcaa304551402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:37:44.657121   38658 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1205 20:37:44.658945   38658 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:37:44.658982   38658 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1205 20:37:44.659089   38658 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1205 20:37:44.659424   38658 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1205 20:37:44.659712   38658 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1205 20:37:44.659872   38658 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1205 20:37:44.660821   38658 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1205 20:37:44.680646   38658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I1205 20:37:44.680996   38658 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:37:44.681712   38658 main.go:141] libmachine: Using API Version  1
	I1205 20:37:44.681729   38658 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:37:44.682112   38658 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:37:44.682305   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:44.682504   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetState
	I1205 20:37:44.684372   38658 fix.go:102] recreateIfNeeded on running-upgrade-254896: state=Running err=<nil>
	W1205 20:37:44.684416   38658 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:37:44.686246   38658 out.go:177] * Updating the running kvm2 "running-upgrade-254896" VM ...
	I1205 20:37:44.688808   38658 machine.go:88] provisioning docker machine ...
	I1205 20:37:44.688842   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:44.689468   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetMachineName
	I1205 20:37:44.689636   38658 buildroot.go:166] provisioning hostname "running-upgrade-254896"
	I1205 20:37:44.689656   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetMachineName
	I1205 20:37:44.690364   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:44.694020   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:44.694074   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:44.694107   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:44.695214   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:44.695509   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:44.698743   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:44.704058   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:44.704262   38658 main.go:141] libmachine: Using SSH client type: native
	I1205 20:37:44.704741   38658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I1205 20:37:44.704760   38658 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-254896 && echo "running-upgrade-254896" | sudo tee /etc/hostname
	I1205 20:37:44.835747   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:37:44.844807   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1205 20:37:44.853391   38658 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-254896
	
	I1205 20:37:44.853424   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:44.856665   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:44.856978   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:44.857034   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:44.857313   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:44.857596   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:44.857774   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:44.857974   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:44.858184   38658 main.go:141] libmachine: Using SSH client type: native
	I1205 20:37:44.858715   38658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I1205 20:37:44.858771   38658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-254896' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-254896/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-254896' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:37:44.891105   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1205 20:37:44.896480   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1205 20:37:44.906877   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1205 20:37:44.911623   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1205 20:37:44.914664   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1205 20:37:44.914694   38658 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 257.970838ms
	I1205 20:37:44.914708   38658 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1205 20:37:44.922528   38658 cache.go:162] opening:  /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1205 20:37:45.030493   38658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:37:45.030528   38658 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:37:45.030565   38658 buildroot.go:174] setting up certificates
	I1205 20:37:45.030578   38658 provision.go:83] configureAuth start
	I1205 20:37:45.030607   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetMachineName
	I1205 20:37:45.030907   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetIP
	I1205 20:37:45.040771   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:45.040854   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.040877   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:45.040902   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.045651   38658 provision.go:138] copyHostCerts
	I1205 20:37:45.045669   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.045707   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:45.045715   38658 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:37:45.045725   38658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:37:45.045731   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.045783   38658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:37:45.045907   38658 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:37:45.045914   38658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:37:45.045942   38658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:37:45.046021   38658 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:37:45.046027   38658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:37:45.046056   38658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:37:45.046186   38658 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-254896 san=[192.168.50.117 192.168.50.117 localhost 127.0.0.1 minikube running-upgrade-254896]
	I1205 20:37:45.188206   38658 provision.go:172] copyRemoteCerts
	I1205 20:37:45.188305   38658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:37:45.188336   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:45.192468   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:45.192560   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.192589   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:45.192618   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:45.192631   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.192762   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:45.192916   38658 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/running-upgrade-254896/id_rsa Username:docker}
	I1205 20:37:45.332326   38658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:37:45.362544   38658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:37:45.404585   38658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:37:45.441203   38658 provision.go:86] duration metric: configureAuth took 410.611391ms
	I1205 20:37:45.441242   38658 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:37:45.441453   38658 config.go:182] Loaded profile config "running-upgrade-254896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:37:45.441542   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:45.446375   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:45.446376   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.446456   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:45.446494   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:45.447105   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:45.447683   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:45.447938   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:45.448102   38658 main.go:141] libmachine: Using SSH client type: native
	I1205 20:37:45.448604   38658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I1205 20:37:45.448626   38658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:37:45.523343   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1205 20:37:45.523425   38658 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 866.381623ms
	I1205 20:37:45.523454   38658 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1205 20:37:45.885386   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1205 20:37:45.885411   38658 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.229089211s
	I1205 20:37:45.885442   38658 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1205 20:37:46.048352   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1205 20:37:46.048447   38658 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.392330069s
	I1205 20:37:46.048486   38658 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1205 20:37:46.286357   38658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:37:46.286399   38658 machine.go:91] provisioned docker machine in 1.597570231s
	I1205 20:37:46.286411   38658 start.go:300] post-start starting for "running-upgrade-254896" (driver="kvm2")
	I1205 20:37:46.286422   38658 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:37:46.286447   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:46.286763   38658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:37:46.286790   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:46.432023   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1205 20:37:46.432109   38658 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.776238041s
	I1205 20:37:46.432136   38658 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1205 20:37:46.507852   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.508749   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:46.508783   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.508839   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:46.509037   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:46.509161   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:46.509270   38658 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/running-upgrade-254896/id_rsa Username:docker}
	I1205 20:37:46.594295   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1205 20:37:46.594328   38658 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.937444249s
	I1205 20:37:46.594343   38658 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1205 20:37:46.654779   38658 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:37:46.663287   38658 info.go:137] Remote host: Buildroot 2019.02.7
	I1205 20:37:46.663316   38658 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:37:46.663389   38658 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:37:46.663580   38658 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:37:46.663745   38658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:37:46.683198   38658 cache.go:157] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1205 20:37:46.683290   38658 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 2.026782869s
	I1205 20:37:46.683319   38658 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1205 20:37:46.683364   38658 cache.go:87] Successfully saved all images to host disk.
	I1205 20:37:46.690731   38658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:37:46.710206   38658 start.go:303] post-start completed in 423.779645ms
	I1205 20:37:46.710332   38658 fix.go:56] fixHost completed within 2.055340881s
	I1205 20:37:46.710371   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:46.713501   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.713878   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:46.713905   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.714251   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:46.714499   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:46.714692   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:46.714867   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:46.715097   38658 main.go:141] libmachine: Using SSH client type: native
	I1205 20:37:46.715393   38658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I1205 20:37:46.715404   38658 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:37:46.876570   38658 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701808666.871661466
	
	I1205 20:37:46.876590   38658 fix.go:206] guest clock: 1701808666.871661466
	I1205 20:37:46.876600   38658 fix.go:219] Guest: 2023-12-05 20:37:46.871661466 +0000 UTC Remote: 2023-12-05 20:37:46.710350792 +0000 UTC m=+2.340375029 (delta=161.310674ms)
	I1205 20:37:46.876621   38658 fix.go:190] guest clock delta is within tolerance: 161.310674ms
	I1205 20:37:46.876627   38658 start.go:83] releasing machines lock for "running-upgrade-254896", held for 2.221657835s
	I1205 20:37:46.876647   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:46.876892   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetIP
	I1205 20:37:46.879877   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.880385   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:46.880411   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.880586   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:46.881281   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:46.881479   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .DriverName
	I1205 20:37:46.881580   38658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:37:46.881628   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:46.881716   38658 ssh_runner.go:195] Run: cat /version.json
	I1205 20:37:46.881742   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHHostname
	I1205 20:37:46.885102   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.885509   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.886174   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:46.886222   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.886250   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:3e:d9", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:35:56 +0000 UTC Type:0 Mac:52:54:00:3a:3e:d9 Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:running-upgrade-254896 Clientid:01:52:54:00:3a:3e:d9}
	I1205 20:37:46.886268   38658 main.go:141] libmachine: (running-upgrade-254896) DBG | domain running-upgrade-254896 has defined IP address 192.168.50.117 and MAC address 52:54:00:3a:3e:d9 in network minikube-net
	I1205 20:37:46.886478   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:46.886492   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHPort
	I1205 20:37:46.886671   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:46.886677   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHKeyPath
	I1205 20:37:46.886829   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:46.886831   38658 main.go:141] libmachine: (running-upgrade-254896) Calling .GetSSHUsername
	I1205 20:37:46.886980   38658 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/running-upgrade-254896/id_rsa Username:docker}
	I1205 20:37:46.887006   38658 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/running-upgrade-254896/id_rsa Username:docker}
	W1205 20:37:46.988606   38658 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:37:46.988687   38658 ssh_runner.go:195] Run: systemctl --version
	I1205 20:37:47.025043   38658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:37:47.110757   38658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:37:47.123380   38658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:37:47.123486   38658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:37:47.130519   38658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:37:47.130549   38658 start.go:475] detecting cgroup driver to use...
	I1205 20:37:47.130619   38658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:37:47.145416   38658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:37:47.157380   38658 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:37:47.157443   38658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:37:47.173001   38658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:37:47.185104   38658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:37:47.199836   38658 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:37:47.199901   38658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:37:47.407950   38658 docker.go:219] disabling docker service ...
	I1205 20:37:47.408031   38658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:37:48.439348   38658 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.031292141s)
	I1205 20:37:48.439415   38658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:37:48.457328   38658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:37:48.592383   38658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:37:48.742997   38658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:37:48.755313   38658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:37:48.770629   38658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:37:48.770705   38658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:37:48.780721   38658 out.go:177] 
	W1205 20:37:48.782072   38658 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:37:48.782093   38658 out.go:239] * 
	* 
	W1205 20:37:48.783298   38658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:37:48.784756   38658 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-254896 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-05 20:37:48.806972245 +0000 UTC m=+3781.619431869
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-254896 -n running-upgrade-254896
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-254896 -n running-upgrade-254896: exit status 4 (445.37022ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:37:49.146779   38911 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-254896" does not appear in /home/jenkins/minikube-integration/17731-6237/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-254896" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-254896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-254896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-254896: (1.478675984s)
--- FAIL: TestRunningBinaryUpgrade (147.57s)
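For reference, the RUNTIME_ENABLE error in the log above comes from the pause_image update step: the VM created by the old v1.6.2 binary still runs an older guest image whose CRI-O configuration likely predates the /etc/crio/crio.conf.d drop-in layout, so the sed edit has no file to modify. Below is a minimal shell sketch of that step with a defensive fallback; the sed command, path, and pause tag are taken verbatim from the log, while the fallback branch is purely illustrative and is not what minikube actually does.

    # Reproduce the failing step, then fall back to creating the drop-in.
    # CONF and PAUSE_IMAGE come from the log; the else-branch is an assumption.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    PAUSE_IMAGE=registry.k8s.io/pause:3.1
    if [ -f "$CONF" ]; then
      # This is the command the test ran; it exits 1 when $CONF does not exist.
      sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$PAUSE_IMAGE\"|" "$CONF"
    else
      # pause_image belongs to CRI-O's [crio.image] table.
      sudo mkdir -p /etc/crio/crio.conf.d
      printf '[crio.image]\npause_image = "%s"\n' "$PAUSE_IMAGE" | sudo tee "$CONF" >/dev/null
    fi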

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (288.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2315303044.exe start -p stopped-upgrade-601680 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2315303044.exe start -p stopped-upgrade-601680 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m18.336023997s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2315303044.exe -p stopped-upgrade-601680 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2315303044.exe -p stopped-upgrade-601680 stop: (1m33.202467075s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-601680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-601680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (57.285186954s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-601680] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-601680 in cluster stopped-upgrade-601680
	* Restarting existing kvm2 VM for "stopped-upgrade-601680" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:40:24.237811   42939 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:40:24.237960   42939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:40:24.237971   42939 out.go:309] Setting ErrFile to fd 2...
	I1205 20:40:24.237976   42939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:40:24.238216   42939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:40:24.238835   42939 out.go:303] Setting JSON to false
	I1205 20:40:24.239889   42939 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4977,"bootTime":1701803847,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:40:24.239952   42939 start.go:138] virtualization: kvm guest
	I1205 20:40:24.242608   42939 out.go:177] * [stopped-upgrade-601680] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:40:24.244260   42939 notify.go:220] Checking for updates...
	I1205 20:40:24.244271   42939 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:40:24.245828   42939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:40:24.247310   42939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:40:24.249239   42939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:40:24.250695   42939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:40:24.252191   42939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:40:24.253962   42939 config.go:182] Loaded profile config "stopped-upgrade-601680": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:40:24.253980   42939 start_flags.go:694] config upgrade: Driver=kvm2
	I1205 20:40:24.253991   42939 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f
	I1205 20:40:24.254066   42939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/stopped-upgrade-601680/config.json ...
	I1205 20:40:24.254689   42939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:24.254764   42939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:24.269888   42939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I1205 20:40:24.270267   42939 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:24.270860   42939 main.go:141] libmachine: Using API Version  1
	I1205 20:40:24.270908   42939 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:24.271223   42939 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:24.271412   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:40:24.273576   42939 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1205 20:40:24.274992   42939 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:40:24.275287   42939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:24.275351   42939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:24.290778   42939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I1205 20:40:24.291169   42939 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:24.291703   42939 main.go:141] libmachine: Using API Version  1
	I1205 20:40:24.291726   42939 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:24.292189   42939 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:24.292531   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:40:24.330077   42939 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:40:24.331524   42939 start.go:298] selected driver: kvm2
	I1205 20:40:24.331545   42939 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-601680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.127 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:40:24.331654   42939 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:40:24.332644   42939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.332748   42939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:40:24.349167   42939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:40:24.349669   42939 cni.go:84] Creating CNI manager for ""
	I1205 20:40:24.349701   42939 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1205 20:40:24.349714   42939 start_flags.go:323] config:
	{Name:stopped-upgrade-601680 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.127 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1205 20:40:24.349958   42939 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.353085   42939 out.go:177] * Starting control plane node stopped-upgrade-601680 in cluster stopped-upgrade-601680
	I1205 20:40:24.354697   42939 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1205 20:40:24.381828   42939 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:40:24.382015   42939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/stopped-upgrade-601680/config.json ...
	I1205 20:40:24.382125   42939 cache.go:107] acquiring lock: {Name:mkc10e94e0d249944b279af624ed092b227942a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382173   42939 cache.go:107] acquiring lock: {Name:mkda0b4f34f825dfe4e0303962ddcaa304551402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382200   42939 cache.go:107] acquiring lock: {Name:mk63236ff2284399ef1084f4868ac79057954ac7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382232   42939 cache.go:107] acquiring lock: {Name:mk7462b7dbcc5e2ca3994d68c9cee18f46803b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382247   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1205 20:40:24.382219   42939 cache.go:107] acquiring lock: {Name:mk9a0c91c42e5817c5aa4fb2cc05648531c85742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382261   42939 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 145.7µs
	I1205 20:40:24.382534   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1205 20:40:24.382566   42939 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 336.072µs
	I1205 20:40:24.382579   42939 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1205 20:40:24.382599   42939 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1205 20:40:24.382609   42939 cache.go:107] acquiring lock: {Name:mk0522e775dff44d620aa8a2c87b50c7de8a7987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382290   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1205 20:40:24.382658   42939 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 464.716µs
	I1205 20:40:24.382673   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1205 20:40:24.382676   42939 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1205 20:40:24.382124   42939 cache.go:107] acquiring lock: {Name:mk28353803396fbe7c185ace9b7c2f728ad7e4f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382680   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1205 20:40:24.382683   42939 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 77.758µs
	I1205 20:40:24.382699   42939 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 525.494µs
	I1205 20:40:24.382707   42939 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1205 20:40:24.382710   42939 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1205 20:40:24.382696   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1205 20:40:24.382725   42939 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 531.847µs
	I1205 20:40:24.382736   42939 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1205 20:40:24.382720   42939 cache.go:107] acquiring lock: {Name:mk7098894afec2c26df02d2aad041391ca05429b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382626   42939 start.go:365] acquiring machines lock for stopped-upgrade-601680: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:40:24.383135   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1205 20:40:24.383162   42939 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 447.363µs
	I1205 20:40:24.383172   42939 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1205 20:40:24.383203   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:40:24.383209   42939 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.100355ms
	I1205 20:40:24.383218   42939 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:40:24.383225   42939 cache.go:87] Successfully saved all images to host disk.
	I1205 20:40:38.687271   42939 start.go:369] acquired machines lock for "stopped-upgrade-601680" in 14.304459763s
	I1205 20:40:38.687332   42939 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:40:38.687340   42939 fix.go:54] fixHost starting: minikube
	I1205 20:40:38.687760   42939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:38.687801   42939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:38.704694   42939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1205 20:40:38.705105   42939 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:38.705564   42939 main.go:141] libmachine: Using API Version  1
	I1205 20:40:38.705584   42939 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:38.705963   42939 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:38.706155   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:40:38.706312   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetState
	I1205 20:40:38.708010   42939 fix.go:102] recreateIfNeeded on stopped-upgrade-601680: state=Stopped err=<nil>
	I1205 20:40:38.708044   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	W1205 20:40:38.708238   42939 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:40:38.711305   42939 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-601680" ...
	I1205 20:40:38.712925   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .Start
	I1205 20:40:38.713119   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring networks are active...
	I1205 20:40:38.713937   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring network default is active
	I1205 20:40:38.714287   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring network minikube-net is active
	I1205 20:40:38.714673   42939 main.go:141] libmachine: (stopped-upgrade-601680) Getting domain xml...
	I1205 20:40:38.715360   42939 main.go:141] libmachine: (stopped-upgrade-601680) Creating domain...
	I1205 20:40:40.046007   42939 main.go:141] libmachine: (stopped-upgrade-601680) Waiting to get IP...
	I1205 20:40:40.047125   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.047603   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.047721   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.047580   43173 retry.go:31] will retry after 241.913322ms: waiting for machine to come up
	I1205 20:40:40.291365   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.291861   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.291914   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.291830   43173 retry.go:31] will retry after 261.894859ms: waiting for machine to come up
	I1205 20:40:40.555507   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.556151   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.556287   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.556239   43173 retry.go:31] will retry after 372.036636ms: waiting for machine to come up
	I1205 20:40:40.930108   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.930945   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.930973   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.930837   43173 retry.go:31] will retry after 446.265845ms: waiting for machine to come up
	I1205 20:40:41.378530   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:41.379031   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:41.379093   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:41.378993   43173 retry.go:31] will retry after 661.612365ms: waiting for machine to come up
	I1205 20:40:42.041832   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:42.042668   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:42.042692   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:42.042589   43173 retry.go:31] will retry after 730.768928ms: waiting for machine to come up
	I1205 20:40:42.774829   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:42.775400   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:42.775443   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:42.775344   43173 retry.go:31] will retry after 1.118611444s: waiting for machine to come up
	I1205 20:40:43.895862   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:43.896354   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:43.896387   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:43.896292   43173 retry.go:31] will retry after 980.173523ms: waiting for machine to come up
	I1205 20:40:44.878602   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:44.879167   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:44.879200   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:44.879127   43173 retry.go:31] will retry after 1.385044864s: waiting for machine to come up
	I1205 20:40:46.265240   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:46.265670   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:46.265700   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:46.265623   43173 retry.go:31] will retry after 1.740287111s: waiting for machine to come up
	I1205 20:40:48.008178   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:48.008727   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:48.008757   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:48.008671   43173 retry.go:31] will retry after 1.931230772s: waiting for machine to come up
	I1205 20:40:49.942075   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:49.942587   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:49.942618   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:49.942527   43173 retry.go:31] will retry after 3.401653628s: waiting for machine to come up
	I1205 20:40:53.348042   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:53.348595   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:53.348624   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:53.348542   43173 retry.go:31] will retry after 3.558160616s: waiting for machine to come up
	I1205 20:40:56.907835   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:56.908361   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:56.908392   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:56.908307   43173 retry.go:31] will retry after 5.657646006s: waiting for machine to come up
	I1205 20:41:03.000031   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:03.000996   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:41:03.001024   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:41:03.000967   43173 retry.go:31] will retry after 5.289537041s: waiting for machine to come up
	I1205 20:41:08.291684   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:08.292229   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:41:08.292266   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:41:08.292198   43173 retry.go:31] will retry after 8.515055508s: waiting for machine to come up
	I1205 20:41:16.810730   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.811156   42939 main.go:141] libmachine: (stopped-upgrade-601680) Found IP for machine: 192.168.50.127
	I1205 20:41:16.811180   42939 main.go:141] libmachine: (stopped-upgrade-601680) Reserving static IP address...
	I1205 20:41:16.811198   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has current primary IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.811763   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "stopped-upgrade-601680", mac: "52:54:00:93:33:e4", ip: "192.168.50.127"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:16.811810   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-601680", mac: "52:54:00:93:33:e4", ip: "192.168.50.127"}
	I1205 20:41:16.811824   42939 main.go:141] libmachine: (stopped-upgrade-601680) Reserved static IP address: 192.168.50.127
	I1205 20:41:16.811848   42939 main.go:141] libmachine: (stopped-upgrade-601680) Waiting for SSH to be available...
	I1205 20:41:16.811862   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | Getting to WaitForSSH function...
	I1205 20:41:16.814077   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.814490   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:16.814519   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.814663   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | Using SSH client type: external
	I1205 20:41:16.814684   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa (-rw-------)
	I1205 20:41:16.814728   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:41:16.814747   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | About to run SSH command:
	I1205 20:41:16.814770   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | exit 0
	I1205 20:41:16.941538   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | SSH cmd err, output: <nil>: 
	I1205 20:41:16.941917   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetConfigRaw
	I1205 20:41:16.942575   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetIP
	I1205 20:41:16.945160   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.945525   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:16.945562   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.945769   42939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/stopped-upgrade-601680/config.json ...
	I1205 20:41:16.946008   42939 machine.go:88] provisioning docker machine ...
	I1205 20:41:16.946026   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:16.946221   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetMachineName
	I1205 20:41:16.946388   42939 buildroot.go:166] provisioning hostname "stopped-upgrade-601680"
	I1205 20:41:16.946405   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetMachineName
	I1205 20:41:16.946565   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:16.948294   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.948578   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:16.948608   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:16.948683   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:16.948845   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:16.948987   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:16.949095   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:16.949259   42939 main.go:141] libmachine: Using SSH client type: native
	I1205 20:41:16.949632   42939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.127 22 <nil> <nil>}
	I1205 20:41:16.949653   42939 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-601680 && echo "stopped-upgrade-601680" | sudo tee /etc/hostname
	I1205 20:41:17.065142   42939 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-601680
	
	I1205 20:41:17.065169   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:17.067714   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.068004   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:17.068035   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.068199   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:17.068392   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:17.068538   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:17.068676   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:17.068869   42939 main.go:141] libmachine: Using SSH client type: native
	I1205 20:41:17.069185   42939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.127 22 <nil> <nil>}
	I1205 20:41:17.069204   42939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-601680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-601680/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-601680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:41:17.182490   42939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:41:17.182521   42939 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:41:17.182541   42939 buildroot.go:174] setting up certificates
	I1205 20:41:17.182549   42939 provision.go:83] configureAuth start
	I1205 20:41:17.182557   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetMachineName
	I1205 20:41:17.182833   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetIP
	I1205 20:41:17.185364   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.185727   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:17.185768   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.185959   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:17.187883   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.188150   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:17.188180   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.188251   42939 provision.go:138] copyHostCerts
	I1205 20:41:17.188302   42939 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:41:17.188322   42939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:41:17.188404   42939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:41:17.188503   42939 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:41:17.188516   42939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:41:17.188555   42939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:41:17.188623   42939 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:41:17.188634   42939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:41:17.188669   42939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:41:17.188734   42939 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-601680 san=[192.168.50.127 192.168.50.127 localhost 127.0.0.1 minikube stopped-upgrade-601680]
	I1205 20:41:17.334589   42939 provision.go:172] copyRemoteCerts
	I1205 20:41:17.334645   42939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:41:17.334667   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:17.337049   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.337413   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:17.337446   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.337614   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:17.337796   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:17.337948   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:17.338087   42939 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa Username:docker}
	I1205 20:41:17.420204   42939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:41:17.433736   42939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:41:17.446440   42939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:41:17.459124   42939 provision.go:86] duration metric: configureAuth took 276.5647ms
	I1205 20:41:17.459148   42939 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:41:17.459314   42939 config.go:182] Loaded profile config "stopped-upgrade-601680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:41:17.459384   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:17.461806   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.462171   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:17.462207   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:17.462366   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:17.462554   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:17.462744   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:17.462924   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:17.463083   42939 main.go:141] libmachine: Using SSH client type: native
	I1205 20:41:17.463385   42939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.127 22 <nil> <nil>}
	I1205 20:41:17.463402   42939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:41:20.651807   42939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:41:20.651840   42939 machine.go:91] provisioned docker machine in 3.705812098s
	I1205 20:41:20.651850   42939 start.go:300] post-start starting for "stopped-upgrade-601680" (driver="kvm2")
	I1205 20:41:20.651863   42939 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:41:20.651882   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:20.652230   42939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:41:20.652256   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:20.654636   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.654924   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:20.654954   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.655113   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:20.655315   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:20.655546   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:20.655685   42939 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa Username:docker}
	I1205 20:41:20.737377   42939 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:41:20.741421   42939 info.go:137] Remote host: Buildroot 2019.02.7
	I1205 20:41:20.741449   42939 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:41:20.741520   42939 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:41:20.741624   42939 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:41:20.741736   42939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:41:20.747182   42939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:41:20.760289   42939 start.go:303] post-start completed in 108.425409ms
	I1205 20:41:20.760309   42939 fix.go:56] fixHost completed within 42.072969342s
	I1205 20:41:20.760333   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:20.763063   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.763430   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:20.763462   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.763641   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:20.763822   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:20.763994   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:20.764102   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:20.764280   42939 main.go:141] libmachine: Using SSH client type: native
	I1205 20:41:20.764571   42939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.127 22 <nil> <nil>}
	I1205 20:41:20.764582   42939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:41:20.876040   42939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701808880.805069345
	
	I1205 20:41:20.876064   42939 fix.go:206] guest clock: 1701808880.805069345
	I1205 20:41:20.876074   42939 fix.go:219] Guest: 2023-12-05 20:41:20.805069345 +0000 UTC Remote: 2023-12-05 20:41:20.760313776 +0000 UTC m=+56.572299520 (delta=44.755569ms)
	I1205 20:41:20.876100   42939 fix.go:190] guest clock delta is within tolerance: 44.755569ms
	I1205 20:41:20.876107   42939 start.go:83] releasing machines lock for "stopped-upgrade-601680", held for 42.188798699s
	I1205 20:41:20.876135   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:20.876402   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetIP
	I1205 20:41:20.879100   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.879485   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:20.879517   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.879636   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:20.880217   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:20.880433   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:41:20.880523   42939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:41:20.880563   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:20.880653   42939 ssh_runner.go:195] Run: cat /version.json
	I1205 20:41:20.880673   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHHostname
	I1205 20:41:20.883142   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.883432   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:20.883465   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.883488   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.883605   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:20.883757   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:20.883842   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:33:e4", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-12-05 21:41:06 +0000 UTC Type:0 Mac:52:54:00:93:33:e4 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:stopped-upgrade-601680 Clientid:01:52:54:00:93:33:e4}
	I1205 20:41:20.883866   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined IP address 192.168.50.127 and MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:41:20.883931   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:20.884037   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHPort
	I1205 20:41:20.884179   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHKeyPath
	I1205 20:41:20.884213   42939 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa Username:docker}
	I1205 20:41:20.884313   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetSSHUsername
	I1205 20:41:20.884429   42939 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/stopped-upgrade-601680/id_rsa Username:docker}
	W1205 20:41:20.990531   42939 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 20:41:20.990595   42939 ssh_runner.go:195] Run: systemctl --version
	I1205 20:41:20.996080   42939 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:41:21.075433   42939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:41:21.081124   42939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:41:21.081203   42939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:41:21.086830   42939 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:41:21.086855   42939 start.go:475] detecting cgroup driver to use...
	I1205 20:41:21.086920   42939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:41:21.097459   42939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:41:21.106414   42939 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:41:21.106485   42939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:41:21.115162   42939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:41:21.126350   42939 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1205 20:41:21.134745   42939 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1205 20:41:21.134809   42939 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:41:21.230492   42939 docker.go:219] disabling docker service ...
	I1205 20:41:21.230576   42939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:41:21.242376   42939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:41:21.250187   42939 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:41:21.339848   42939 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:41:21.434646   42939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:41:21.442498   42939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:41:21.453389   42939 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:41:21.453454   42939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:41:21.461631   42939 out.go:177] 
	W1205 20:41:21.462880   42939 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1205 20:41:21.462898   42939 out.go:239] * 
	* 
	W1205 20:41:21.463687   42939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:41:21.465470   42939 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-601680 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (288.83s)
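The sed failure above is the proximate cause of the exit-status-90 result: the v1.6.2-era guest (Buildroot 2019.02.7, per the os-release check earlier in this log) has no /etc/crio/crio.conf.d/02-crio.conf drop-in, so the pause_image update aborts the runtime-enable step. A minimal sketch of guarding that update is shown below; it is not minikube's actual code, and the local exec.Command stands in for the SSH runner.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // updatePauseImage rewrites pause_image in a CRI-O drop-in config, but only
    // when the drop-in actually exists on the guest image.
    func updatePauseImage(confPath, pauseImage string) error {
        if _, err := os.Stat(confPath); err != nil {
            // Older guest images ship a monolithic /etc/crio/crio.conf and no
            // conf.d drop-in, which is exactly what the log above reports.
            return fmt.Errorf("skipping pause_image update, %s not readable: %w", confPath, err)
        }
        expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, pauseImage)
        return exec.Command("sudo", "sed", "-i", expr, confPath).Run()
    }

    func main() {
        err := updatePauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.1")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }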

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (105.54s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-405510 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.902059249s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-405510] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-405510 in cluster pause-405510
	* Updating the running kvm2 "pause-405510" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-405510" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:39:20.069333   42248 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:39:20.069491   42248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:20.069505   42248 out.go:309] Setting ErrFile to fd 2...
	I1205 20:39:20.069512   42248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:20.069837   42248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:39:20.070884   42248 out.go:303] Setting JSON to false
	I1205 20:39:20.072280   42248 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4913,"bootTime":1701803847,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:20.072363   42248 start.go:138] virtualization: kvm guest
	I1205 20:39:20.074231   42248 out.go:177] * [pause-405510] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:20.076158   42248 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:39:20.076211   42248 notify.go:220] Checking for updates...
	I1205 20:39:20.079280   42248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:20.082174   42248 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:39:20.083669   42248 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:39:20.085114   42248 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:20.086680   42248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:20.088724   42248 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:39:20.089285   42248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:39:20.089353   42248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:39:20.106724   42248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I1205 20:39:20.107249   42248 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:39:20.107875   42248 main.go:141] libmachine: Using API Version  1
	I1205 20:39:20.107899   42248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:39:20.108278   42248 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:39:20.108462   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:39:20.108734   42248 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:39:20.109115   42248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:39:20.109176   42248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:39:20.126926   42248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I1205 20:39:20.127433   42248 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:39:20.127999   42248 main.go:141] libmachine: Using API Version  1
	I1205 20:39:20.128032   42248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:39:20.128410   42248 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:39:20.128683   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:39:20.169501   42248 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:39:20.170784   42248 start.go:298] selected driver: kvm2
	I1205 20:39:20.170804   42248 start.go:902] validating driver "kvm2" against &{Name:pause-405510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-405510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:39:20.170979   42248 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:20.171418   42248 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:39:20.171508   42248 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:39:20.186588   42248 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:39:20.187267   42248 cni.go:84] Creating CNI manager for ""
	I1205 20:39:20.187285   42248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:39:20.187294   42248 start_flags.go:323] config:
	{Name:pause-405510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-405510 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:39:20.187461   42248 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:39:20.190288   42248 out.go:177] * Starting control plane node pause-405510 in cluster pause-405510
	I1205 20:39:20.191698   42248 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:39:20.191752   42248 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:39:20.191766   42248 cache.go:56] Caching tarball of preloaded images
	I1205 20:39:20.191890   42248 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:39:20.191902   42248 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:39:20.192070   42248 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/config.json ...
	I1205 20:39:20.192334   42248 start.go:365] acquiring machines lock for pause-405510: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:40:02.947191   42248 start.go:369] acquired machines lock for "pause-405510" in 42.754823911s
	I1205 20:40:02.947243   42248 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:40:02.947258   42248 fix.go:54] fixHost starting: 
	I1205 20:40:02.947659   42248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:02.947711   42248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:02.963497   42248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1205 20:40:02.963885   42248 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:02.964444   42248 main.go:141] libmachine: Using API Version  1
	I1205 20:40:02.964468   42248 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:02.965149   42248 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:02.965530   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:02.966424   42248 main.go:141] libmachine: (pause-405510) Calling .GetState
	I1205 20:40:02.968235   42248 fix.go:102] recreateIfNeeded on pause-405510: state=Running err=<nil>
	W1205 20:40:02.968273   42248 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:40:02.970631   42248 out.go:177] * Updating the running kvm2 "pause-405510" VM ...
	I1205 20:40:02.972692   42248 machine.go:88] provisioning docker machine ...
	I1205 20:40:02.972712   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:02.972903   42248 main.go:141] libmachine: (pause-405510) Calling .GetMachineName
	I1205 20:40:02.973085   42248 buildroot.go:166] provisioning hostname "pause-405510"
	I1205 20:40:02.973100   42248 main.go:141] libmachine: (pause-405510) Calling .GetMachineName
	I1205 20:40:02.973222   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:02.975790   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:02.976181   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:02.976207   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:02.976385   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:02.976561   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:02.976712   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:02.976859   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:02.977021   42248 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:02.977489   42248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I1205 20:40:02.977510   42248 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-405510 && echo "pause-405510" | sudo tee /etc/hostname
	I1205 20:40:03.121160   42248 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-405510
	
	I1205 20:40:03.121211   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:03.124295   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.124713   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:03.124742   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.124956   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:03.125217   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:03.125391   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:03.125518   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:03.125742   42248 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:03.126239   42248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I1205 20:40:03.126259   42248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-405510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-405510/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-405510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:40:03.251736   42248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
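The shell snippet above keeps the hostname mapping idempotent: it only touches /etc/hosts when no line already ends with the hostname, rewriting an existing 127.0.1.1 entry or appending one otherwise. A simplified in-memory sketch of the same logic follows; local string manipulation replaces the sudo-over-SSH commands, and a plain field check stands in for the grep regex.

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry returns hosts content that maps 127.0.1.1 to name exactly
    // once: unchanged if an entry already exists, rewritten if another 127.0.1.1
    // line is present, appended otherwise.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for i, line := range lines {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[len(fields)-1] == name {
                return hosts // already mapped
            }
            if len(fields) >= 1 && fields[0] == "127.0.1.1" {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "pause-405510"))
    }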
	I1205 20:40:03.251761   42248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:40:03.251792   42248 buildroot.go:174] setting up certificates
	I1205 20:40:03.251798   42248 provision.go:83] configureAuth start
	I1205 20:40:03.251808   42248 main.go:141] libmachine: (pause-405510) Calling .GetMachineName
	I1205 20:40:03.252091   42248 main.go:141] libmachine: (pause-405510) Calling .GetIP
	I1205 20:40:03.254820   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.255194   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:03.255248   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.255429   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:03.257770   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.258134   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:03.258165   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.258257   42248 provision.go:138] copyHostCerts
	I1205 20:40:03.258327   42248 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:40:03.258340   42248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:40:03.258417   42248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:40:03.258552   42248 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:40:03.258565   42248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:40:03.258597   42248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:40:03.258683   42248 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:40:03.258694   42248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:40:03.258720   42248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:40:03.258794   42248 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.pause-405510 san=[192.168.72.159 192.168.72.159 localhost 127.0.0.1 minikube pause-405510]
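provision.go is regenerating the machine's server certificate with the SANs listed above. The sketch below shows how such a certificate could be produced with the standard library only; it is self-signed with an assumed key size and validity for brevity, whereas the real certificate is signed by the shared minikube CA and written out as server.pem / server-key.pem.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-405510"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour), // assumed validity
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "pause-405510"},
            IPAddresses: []net.IP{net.ParseIP("192.168.72.159"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed here for brevity; the log signs with the shared minikube CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }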
	I1205 20:40:03.339320   42248 provision.go:172] copyRemoteCerts
	I1205 20:40:03.339377   42248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:40:03.339400   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:03.342428   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.342830   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:03.342865   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.343115   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:03.343317   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:03.343484   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:03.343654   42248 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/pause-405510/id_rsa Username:docker}
	I1205 20:40:03.444337   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:40:03.473668   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 20:40:03.513010   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:40:03.539456   42248 provision.go:86] duration metric: configureAuth took 287.645026ms
	I1205 20:40:03.539485   42248 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:40:03.539701   42248 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:03.539779   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:03.542748   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.543080   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:03.543110   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:03.543295   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:03.543483   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:03.543664   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:03.543815   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:03.543976   42248 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:03.544435   42248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I1205 20:40:03.544472   42248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:40:11.465435   42248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:40:11.465466   42248 machine.go:91] provisioned docker machine in 8.492760559s
	I1205 20:40:11.465478   42248 start.go:300] post-start starting for "pause-405510" (driver="kvm2")
	I1205 20:40:11.465490   42248 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:40:11.465511   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:11.465851   42248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:40:11.465885   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:11.468947   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.469321   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:11.469357   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.469488   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:11.469717   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:11.469896   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:11.470076   42248 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/pause-405510/id_rsa Username:docker}
	I1205 20:40:11.564678   42248 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:40:11.569017   42248 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:40:11.569070   42248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:40:11.569140   42248 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:40:11.569249   42248 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:40:11.569361   42248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:40:11.579320   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:40:11.604857   42248 start.go:303] post-start completed in 139.347622ms
	I1205 20:40:11.604882   42248 fix.go:56] fixHost completed within 8.657628894s
	I1205 20:40:11.604902   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:11.607337   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.607645   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:11.607672   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.607918   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:11.608157   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:11.608345   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:11.608491   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:11.608651   42248 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:11.608999   42248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I1205 20:40:11.609014   42248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:40:11.803520   42248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701808811.799052126
	
	I1205 20:40:11.803546   42248 fix.go:206] guest clock: 1701808811.799052126
	I1205 20:40:11.803553   42248 fix.go:219] Guest: 2023-12-05 20:40:11.799052126 +0000 UTC Remote: 2023-12-05 20:40:11.60488536 +0000 UTC m=+51.603837919 (delta=194.166766ms)
	I1205 20:40:11.803595   42248 fix.go:190] guest clock delta is within tolerance: 194.166766ms
	I1205 20:40:11.803599   42248 start.go:83] releasing machines lock for "pause-405510", held for 8.856379296s
	I1205 20:40:11.803626   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:11.803902   42248 main.go:141] libmachine: (pause-405510) Calling .GetIP
	I1205 20:40:11.806659   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.807057   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:11.807088   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.807242   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:11.807798   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:11.808016   42248 main.go:141] libmachine: (pause-405510) Calling .DriverName
	I1205 20:40:11.808104   42248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:40:11.808150   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:11.808265   42248 ssh_runner.go:195] Run: cat /version.json
	I1205 20:40:11.808293   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHHostname
	I1205 20:40:11.810926   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.810957   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.811352   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:11.811385   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.811416   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:11.811433   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:11.811515   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:11.811620   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHPort
	I1205 20:40:11.811700   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:11.811775   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHKeyPath
	I1205 20:40:11.811858   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:11.811936   42248 main.go:141] libmachine: (pause-405510) Calling .GetSSHUsername
	I1205 20:40:11.812060   42248 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/pause-405510/id_rsa Username:docker}
	I1205 20:40:11.812308   42248 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/pause-405510/id_rsa Username:docker}
	I1205 20:40:11.899503   42248 ssh_runner.go:195] Run: systemctl --version
	I1205 20:40:11.927853   42248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:40:12.132651   42248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:40:12.148859   42248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:40:12.148930   42248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:40:12.169347   42248 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:40:12.169375   42248 start.go:475] detecting cgroup driver to use...
	I1205 20:40:12.169442   42248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:40:12.332738   42248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:40:12.401706   42248 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:40:12.401786   42248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:40:12.433515   42248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:40:12.497508   42248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:40:12.806367   42248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:40:13.115392   42248 docker.go:219] disabling docker service ...
	I1205 20:40:13.115460   42248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:40:13.146294   42248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:40:13.195567   42248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:40:13.557218   42248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:40:13.971728   42248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:40:13.992305   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:40:14.022645   42248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:40:14.022722   42248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:14.038926   42248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:40:14.039011   42248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:14.054938   42248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:14.070573   42248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:14.087777   42248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:40:14.104165   42248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:40:14.118728   42248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:40:14.131167   42248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:40:14.445439   42248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:40:16.034142   42248 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.588662086s)
	I1205 20:40:16.034176   42248 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:40:16.034246   42248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
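After restarting CRI-O, start.go waits for the runtime socket to reappear before probing crictl. A minimal local sketch of that wait loop is below; the 500ms poll interval is an assumption, and the real check stats the path over SSH rather than on the local filesystem.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is ready")
    }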
	I1205 20:40:16.043620   42248 start.go:543] Will wait 60s for crictl version
	I1205 20:40:16.043695   42248 ssh_runner.go:195] Run: which crictl
	I1205 20:40:16.049336   42248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:40:16.109112   42248 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:40:16.109268   42248 ssh_runner.go:195] Run: crio --version
	I1205 20:40:16.170997   42248 ssh_runner.go:195] Run: crio --version
	I1205 20:40:16.247146   42248 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:40:16.248807   42248 main.go:141] libmachine: (pause-405510) Calling .GetIP
	I1205 20:40:16.252023   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:16.252493   42248 main.go:141] libmachine: (pause-405510) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:d6:6c", ip: ""} in network mk-pause-405510: {Iface:virbr4 ExpiryTime:2023-12-05 21:38:30 +0000 UTC Type:0 Mac:52:54:00:3c:d6:6c Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:pause-405510 Clientid:01:52:54:00:3c:d6:6c}
	I1205 20:40:16.252530   42248 main.go:141] libmachine: (pause-405510) DBG | domain pause-405510 has defined IP address 192.168.72.159 and MAC address 52:54:00:3c:d6:6c in network mk-pause-405510
	I1205 20:40:16.252746   42248 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:40:16.257920   42248 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:40:16.257983   42248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:40:16.319880   42248 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:40:16.319906   42248 crio.go:415] Images already preloaded, skipping extraction
	I1205 20:40:16.319957   42248 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:40:16.360503   42248 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:40:16.360528   42248 cache_images.go:84] Images are preloaded, skipping loading
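The two `crictl images --output json` runs above are how the preload check concludes that every required image is already present. A hedged sketch of that decision follows; the JSON field names (`images`, `repoTags`) mirror the CRI image listing but are an assumption here rather than something shown in the log.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // allPreloaded reports whether every required tag appears in the crictl
    // image listing.
    func allPreloaded(raw []byte, required []string) (bool, error) {
        var out crictlImages
        if err := json.Unmarshal(raw, &out); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range out.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range required {
            if !have[want] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
        ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.9"})
        fmt.Println(ok, err)
    }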
	I1205 20:40:16.360619   42248 ssh_runner.go:195] Run: crio config
	I1205 20:40:16.439522   42248 cni.go:84] Creating CNI manager for ""
	I1205 20:40:16.439549   42248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:40:16.439571   42248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:40:16.439596   42248 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-405510 NodeName:pause-405510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:40:16.439752   42248 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-405510"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
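kubeadm.go renders the config above from the cluster settings (node IP, API server port, Kubernetes version, pod and service subnets). The sketch below shows how a fragment of such a config could be rendered with text/template; the struct and template fragment are invented for illustration and are not minikube's actual kubeadm.go types.

    package main

    import (
        "os"
        "text/template"
    )

    type nodeCfg struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        // Values taken from the generated config above.
        _ = t.Execute(os.Stdout, nodeCfg{
            AdvertiseAddress:  "192.168.72.159",
            BindPort:          8443,
            NodeName:          "pause-405510",
            KubernetesVersion: "v1.28.4",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        })
    }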
	
	I1205 20:40:16.439833   42248 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-405510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-405510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:40:16.439897   42248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:40:16.453701   42248 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:40:16.453794   42248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:40:16.466438   42248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1205 20:40:16.485140   42248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:40:16.504689   42248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1205 20:40:16.523946   42248 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I1205 20:40:16.528193   42248 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510 for IP: 192.168.72.159
	I1205 20:40:16.528240   42248 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:16.528386   42248 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:40:16.528437   42248 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:40:16.528511   42248 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/client.key
	I1205 20:40:16.528596   42248 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/apiserver.key.fc98a751
	I1205 20:40:16.528648   42248 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/proxy-client.key
	I1205 20:40:16.528778   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:40:16.528801   42248 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:40:16.528809   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:40:16.528831   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:40:16.528849   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:40:16.528870   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:40:16.528908   42248 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:40:16.529468   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:40:16.561528   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:40:16.591598   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:40:16.620201   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:40:16.647238   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:40:16.675480   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:40:16.702596   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:40:16.729661   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:40:17.233416   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:40:17.304435   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:40:17.366457   42248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:40:17.414339   42248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:40:17.456866   42248 ssh_runner.go:195] Run: openssl version
	I1205 20:40:17.472463   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:40:17.495000   42248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:40:17.502756   42248 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:40:17.502832   42248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:40:17.511468   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:40:17.527224   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:40:17.544094   42248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:40:17.550999   42248 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:40:17.551069   42248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:40:17.560534   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:40:17.575986   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:40:17.593474   42248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:17.600933   42248 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:17.601003   42248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:17.609593   42248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:40:17.626028   42248 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:40:17.634419   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:40:17.644982   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:40:17.655442   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:40:17.665312   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:40:17.677258   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:40:17.687772   42248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:40:17.697885   42248 kubeadm.go:404] StartCluster: {Name:pause-405510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-405510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:40:17.698150   42248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:40:17.698214   42248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:40:17.766478   42248 cri.go:89] found id: "47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d"
	I1205 20:40:17.766509   42248 cri.go:89] found id: "be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01"
	I1205 20:40:17.766517   42248 cri.go:89] found id: "e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6"
	I1205 20:40:17.766524   42248 cri.go:89] found id: "bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823"
	I1205 20:40:17.766530   42248 cri.go:89] found id: "301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3"
	I1205 20:40:17.766537   42248 cri.go:89] found id: "00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f"
	I1205 20:40:17.766544   42248 cri.go:89] found id: ""
	I1205 20:40:17.766630   42248 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
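The trace above shows the commands minikube runs over SSH while deciding whether the cluster needs reconfiguring: listing kube-system containers with crictl, checking certificate validity with openssl, and probing for a kube-apiserver process. When triaging this failure by hand, the same checks can be replayed on the node; a minimal sketch, assuming the pause-405510 profile is still running (the paths and flags are taken from the log, the session itself is illustrative):

	# open a shell on the test VM for this profile
	minikube ssh -p pause-405510
	# list kube-system containers the same way the trace does
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# confirm a control-plane certificate is still valid for at least 24h (86400s)
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	# check whether an apiserver process is running at all
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
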
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-405510 -n pause-405510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-405510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-405510 logs -n 25: (1.694314478s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo docker                         | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo find                           | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo crio                           | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-855101                                     | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	| start   | -p force-systemd-env-903631                          | force-systemd-env-903631  | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:40 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-405510                                      | pause-405510              | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:41 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-699600 ssh cat                    | force-systemd-flag-699600 | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-699600                         | force-systemd-flag-699600 | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	| start   | -p cert-options-525564                               | cert-options-525564       | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-601680                            | stopped-upgrade-601680    | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-903631                          | force-systemd-env-903631  | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC | 05 Dec 23 20:40 UTC |
	| start   | -p cert-expiration-873953                            | cert-expiration-873953    | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:40:28
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:40:28.534493   43095 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:40:28.534641   43095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:40:28.534644   43095 out.go:309] Setting ErrFile to fd 2...
	I1205 20:40:28.534648   43095 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:40:28.534843   43095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:40:28.535425   43095 out.go:303] Setting JSON to false
	I1205 20:40:28.536392   43095 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4982,"bootTime":1701803847,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:40:28.536442   43095 start.go:138] virtualization: kvm guest
	I1205 20:40:28.538965   43095 out.go:177] * [cert-expiration-873953] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:40:28.540303   43095 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:40:28.540336   43095 notify.go:220] Checking for updates...
	I1205 20:40:28.541912   43095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:40:28.543389   43095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:40:28.544825   43095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:40:28.546427   43095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:40:28.547893   43095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:40:28.549678   43095 config.go:182] Loaded profile config "cert-options-525564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:28.549796   43095 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:28.549859   43095 config.go:182] Loaded profile config "stopped-upgrade-601680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:40:28.549922   43095 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:40:28.586343   43095 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:40:28.587746   43095 start.go:298] selected driver: kvm2
	I1205 20:40:28.587752   43095 start.go:902] validating driver "kvm2" against <nil>
	I1205 20:40:28.587762   43095 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:40:28.588460   43095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:28.588523   43095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:40:28.603787   43095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:40:28.603850   43095 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 20:40:28.604051   43095 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:40:28.604093   43095 cni.go:84] Creating CNI manager for ""
	I1205 20:40:28.604101   43095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:40:28.604108   43095 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:40:28.604115   43095 start_flags.go:323] config:
	{Name:cert-expiration-873953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-873953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:40:28.604230   43095 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:28.606064   43095 out.go:177] * Starting control plane node cert-expiration-873953 in cluster cert-expiration-873953
	I1205 20:40:24.354697   42939 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1205 20:40:24.381828   42939 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1205 20:40:24.382015   42939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/stopped-upgrade-601680/config.json ...
	I1205 20:40:24.382125   42939 cache.go:107] acquiring lock: {Name:mkc10e94e0d249944b279af624ed092b227942a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382173   42939 cache.go:107] acquiring lock: {Name:mkda0b4f34f825dfe4e0303962ddcaa304551402 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382200   42939 cache.go:107] acquiring lock: {Name:mk63236ff2284399ef1084f4868ac79057954ac7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382232   42939 cache.go:107] acquiring lock: {Name:mk7462b7dbcc5e2ca3994d68c9cee18f46803b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382247   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1205 20:40:24.382219   42939 cache.go:107] acquiring lock: {Name:mk9a0c91c42e5817c5aa4fb2cc05648531c85742 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382261   42939 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 145.7µs
	I1205 20:40:24.382534   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1205 20:40:24.382566   42939 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 336.072µs
	I1205 20:40:24.382579   42939 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1205 20:40:24.382599   42939 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1205 20:40:24.382609   42939 cache.go:107] acquiring lock: {Name:mk0522e775dff44d620aa8a2c87b50c7de8a7987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382290   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1205 20:40:24.382658   42939 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 464.716µs
	I1205 20:40:24.382673   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1205 20:40:24.382676   42939 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1205 20:40:24.382124   42939 cache.go:107] acquiring lock: {Name:mk28353803396fbe7c185ace9b7c2f728ad7e4f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382680   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1205 20:40:24.382683   42939 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 77.758µs
	I1205 20:40:24.382699   42939 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 525.494µs
	I1205 20:40:24.382707   42939 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1205 20:40:24.382710   42939 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1205 20:40:24.382696   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1205 20:40:24.382725   42939 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 531.847µs
	I1205 20:40:24.382736   42939 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1205 20:40:24.382720   42939 cache.go:107] acquiring lock: {Name:mk7098894afec2c26df02d2aad041391ca05429b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:40:24.382626   42939 start.go:365] acquiring machines lock for stopped-upgrade-601680: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:40:24.383135   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1205 20:40:24.383162   42939 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 447.363µs
	I1205 20:40:24.383172   42939 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1205 20:40:24.383203   42939 cache.go:115] /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 20:40:24.383209   42939 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.100355ms
	I1205 20:40:24.383218   42939 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 20:40:24.383225   42939 cache.go:87] Successfully saved all images to host disk.
	I1205 20:40:25.511734   42248 api_server.go:166] Checking apiserver status ...
	I1205 20:40:25.511840   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:40:25.528384   42248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:26.011742   42248 api_server.go:166] Checking apiserver status ...
	I1205 20:40:26.011818   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:40:26.023217   42248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:26.512617   42248 api_server.go:166] Checking apiserver status ...
	I1205 20:40:26.512746   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:40:26.528371   42248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:27.011814   42248 api_server.go:166] Checking apiserver status ...
	I1205 20:40:27.011892   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:40:27.027632   42248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:27.512245   42248 api_server.go:166] Checking apiserver status ...
	I1205 20:40:27.512364   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:40:27.526966   42248 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:27.962651   42248 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:40:27.962697   42248 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:40:27.962708   42248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:40:27.962771   42248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:40:28.026775   42248 cri.go:89] found id: "c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c"
	I1205 20:40:28.026803   42248 cri.go:89] found id: "88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82"
	I1205 20:40:28.026811   42248 cri.go:89] found id: "a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223"
	I1205 20:40:28.026817   42248 cri.go:89] found id: "47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d"
	I1205 20:40:28.026823   42248 cri.go:89] found id: "be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01"
	I1205 20:40:28.026829   42248 cri.go:89] found id: "e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6"
	I1205 20:40:28.026839   42248 cri.go:89] found id: "bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823"
	I1205 20:40:28.026844   42248 cri.go:89] found id: "301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3"
	I1205 20:40:28.026849   42248 cri.go:89] found id: "00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f"
	I1205 20:40:28.026859   42248 cri.go:89] found id: ""
	I1205 20:40:28.026872   42248 cri.go:234] Stopping containers: [c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c 88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82 a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223 47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01 e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6 bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3 00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f]
	I1205 20:40:28.026945   42248 ssh_runner.go:195] Run: which crictl
	I1205 20:40:28.031416   42248 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c 88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82 a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223 47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01 e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6 bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3 00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f
	I1205 20:40:28.776621   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:28.777022   42693 main.go:141] libmachine: (cert-options-525564) DBG | unable to find current IP address of domain cert-options-525564 in network mk-cert-options-525564
	I1205 20:40:28.777037   42693 main.go:141] libmachine: (cert-options-525564) DBG | I1205 20:40:28.776984   42803 retry.go:31] will retry after 3.312243542s: waiting for machine to come up
	I1205 20:40:32.091345   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:32.091737   42693 main.go:141] libmachine: (cert-options-525564) DBG | unable to find current IP address of domain cert-options-525564 in network mk-cert-options-525564
	I1205 20:40:32.091770   42693 main.go:141] libmachine: (cert-options-525564) DBG | I1205 20:40:32.091692   42803 retry.go:31] will retry after 4.902240099s: waiting for machine to come up
	I1205 20:40:28.607396   43095 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:40:28.607428   43095 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:40:28.607439   43095 cache.go:56] Caching tarball of preloaded images
	I1205 20:40:28.607505   43095 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:40:28.607511   43095 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:40:28.607605   43095 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-expiration-873953/config.json ...
	I1205 20:40:28.607617   43095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-expiration-873953/config.json: {Name:mk351797b200767a13308c72369ecb0d16d2c15f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:28.607749   43095 start.go:365] acquiring machines lock for cert-expiration-873953: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:40:33.506310   42248 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c 88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82 a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223 47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01 e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6 bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3 00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f: (5.474825116s)
	W1205 20:40:33.506393   42248 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c 88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82 a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223 47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01 e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6 bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 301a16c8eb8e46eb8b32a38da047f6fcb6ab4eab90670dcfadbc9deaaa60ded3 00e4ba568fb1e6c28643568c8c4d7c68eff219a6436493a702ea26e21640096f: Process exited with status 1
	stdout:
	c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c
	88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82
	a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223
	47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d
	be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01
	e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6
	
	stderr:
	E1205 20:40:33.501630    3074 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823\": container with ID starting with bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 not found: ID does not exist" containerID="bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823"
	time="2023-12-05T20:40:33Z" level=fatal msg="stopping the container \"bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823\": rpc error: code = NotFound desc = could not find container \"bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823\": container with ID starting with bd47155fe73546f9f1ef05380847637a61693fb6bf84055e5f0c1315b05b0823 not found: ID does not exist"
	I1205 20:40:33.506473   42248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:40:33.547715   42248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:40:33.558289   42248 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Dec  5 20:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Dec  5 20:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Dec  5 20:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec  5 20:38 /etc/kubernetes/scheduler.conf
	
	I1205 20:40:33.558384   42248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:40:33.568059   42248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:40:33.577917   42248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:40:33.588113   42248 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:33.588190   42248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:40:33.598140   42248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:40:33.607530   42248 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:40:33.607601   42248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:40:33.617444   42248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:40:33.627750   42248 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:40:33.627785   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:33.704877   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:34.591531   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:34.831912   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:34.907512   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:34.977229   42248 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:40:34.977330   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:35.024412   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:36.995694   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:36.996125   42693 main.go:141] libmachine: (cert-options-525564) Found IP for machine: 192.168.39.95
	I1205 20:40:36.996138   42693 main.go:141] libmachine: (cert-options-525564) Reserving static IP address...
	I1205 20:40:36.996152   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has current primary IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:36.996596   42693 main.go:141] libmachine: (cert-options-525564) DBG | unable to find host DHCP lease matching {name: "cert-options-525564", mac: "52:54:00:ab:b7:21", ip: "192.168.39.95"} in network mk-cert-options-525564
	I1205 20:40:37.079490   42693 main.go:141] libmachine: (cert-options-525564) DBG | Getting to WaitForSSH function...
	I1205 20:40:37.079509   42693 main.go:141] libmachine: (cert-options-525564) Reserved static IP address: 192.168.39.95
	I1205 20:40:37.079521   42693 main.go:141] libmachine: (cert-options-525564) Waiting for SSH to be available...
	I1205 20:40:37.082219   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.082777   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.082803   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.082957   42693 main.go:141] libmachine: (cert-options-525564) DBG | Using SSH client type: external
	I1205 20:40:37.082976   42693 main.go:141] libmachine: (cert-options-525564) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa (-rw-------)
	I1205 20:40:37.083008   42693 main.go:141] libmachine: (cert-options-525564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:40:37.083017   42693 main.go:141] libmachine: (cert-options-525564) DBG | About to run SSH command:
	I1205 20:40:37.083030   42693 main.go:141] libmachine: (cert-options-525564) DBG | exit 0
	I1205 20:40:37.226611   42693 main.go:141] libmachine: (cert-options-525564) DBG | SSH cmd err, output: <nil>: 
	I1205 20:40:37.226909   42693 main.go:141] libmachine: (cert-options-525564) KVM machine creation complete!
	I1205 20:40:37.227336   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetConfigRaw
	I1205 20:40:37.228031   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:37.228241   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:37.228415   42693 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:40:37.228427   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetState
	I1205 20:40:37.229948   42693 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:40:37.229957   42693 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:40:37.229961   42693 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:40:37.229975   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.232683   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.233064   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.233093   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.233237   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:37.233442   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.233629   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.233772   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:37.233974   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:37.234499   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:37.234508   42693 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:40:38.687271   42939 start.go:369] acquired machines lock for "stopped-upgrade-601680" in 14.304459763s
	I1205 20:40:38.687332   42939 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:40:38.687340   42939 fix.go:54] fixHost starting: minikube
	I1205 20:40:38.687760   42939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:38.687801   42939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:38.704694   42939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1205 20:40:38.705105   42939 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:38.705564   42939 main.go:141] libmachine: Using API Version  1
	I1205 20:40:38.705584   42939 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:38.705963   42939 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:38.706155   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	I1205 20:40:38.706312   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .GetState
	I1205 20:40:38.708010   42939 fix.go:102] recreateIfNeeded on stopped-upgrade-601680: state=Stopped err=<nil>
	I1205 20:40:38.708044   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .DriverName
	W1205 20:40:38.708238   42939 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:40:38.711305   42939 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-601680" ...
	I1205 20:40:38.712925   42939 main.go:141] libmachine: (stopped-upgrade-601680) Calling .Start
	I1205 20:40:38.713119   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring networks are active...
	I1205 20:40:38.713937   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring network default is active
	I1205 20:40:38.714287   42939 main.go:141] libmachine: (stopped-upgrade-601680) Ensuring network minikube-net is active
	I1205 20:40:38.714673   42939 main.go:141] libmachine: (stopped-upgrade-601680) Getting domain xml...
	I1205 20:40:38.715360   42939 main.go:141] libmachine: (stopped-upgrade-601680) Creating domain...
	I1205 20:40:35.541852   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:36.041131   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:36.542136   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:37.041296   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:40:37.070192   42248 api_server.go:72] duration metric: took 2.09296447s to wait for apiserver process to appear ...
	I1205 20:40:37.070222   42248 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:40:37.070247   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:40:37.370186   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:40:37.370201   42693 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:40:37.370209   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.372906   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.373300   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.373325   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.373437   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:37.373630   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.373796   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.373885   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:37.374069   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:37.374401   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:37.374408   42693 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:40:37.506972   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gf888a99-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1205 20:40:37.507030   42693 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:40:37.507038   42693 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:40:37.507047   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetMachineName
	I1205 20:40:37.507321   42693 buildroot.go:166] provisioning hostname "cert-options-525564"
	I1205 20:40:37.507336   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetMachineName
	I1205 20:40:37.507527   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.510430   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.510747   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.510781   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.510905   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:37.511124   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.511318   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.511459   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:37.511663   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:37.512175   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:37.512193   42693 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-525564 && echo "cert-options-525564" | sudo tee /etc/hostname
	I1205 20:40:37.662849   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-525564
	
	I1205 20:40:37.662871   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.665901   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.666297   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.666314   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.666508   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:37.666691   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.666862   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.667045   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:37.667197   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:37.667486   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:37.667497   42693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-525564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-525564/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-525564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:40:37.807412   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:40:37.807433   42693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:40:37.807471   42693 buildroot.go:174] setting up certificates
	I1205 20:40:37.807480   42693 provision.go:83] configureAuth start
	I1205 20:40:37.807489   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetMachineName
	I1205 20:40:37.807732   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetIP
	I1205 20:40:37.810929   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.811826   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.811872   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.812178   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.814596   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.814916   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.814939   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.815069   42693 provision.go:138] copyHostCerts
	I1205 20:40:37.815136   42693 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:40:37.815151   42693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:40:37.815220   42693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:40:37.815311   42693 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:40:37.815315   42693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:40:37.815365   42693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:40:37.815418   42693 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:40:37.815421   42693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:40:37.815442   42693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:40:37.815520   42693 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.cert-options-525564 san=[192.168.39.95 192.168.39.95 localhost 127.0.0.1 minikube cert-options-525564]
	I1205 20:40:37.904319   42693 provision.go:172] copyRemoteCerts
	I1205 20:40:37.904365   42693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:40:37.904392   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:37.907069   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.907477   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:37.907513   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:37.907731   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:37.907918   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:37.908061   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:37.908207   42693 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa Username:docker}
	I1205 20:40:38.003803   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:40:38.027994   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:40:38.052373   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:40:38.076545   42693 provision.go:86] duration metric: configureAuth took 269.055027ms
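	(The provisioning steps above generate a server certificate whose SAN list comes straight from the san=[...] entry in the log: 192.168.39.95, localhost, 127.0.0.1, minikube, cert-options-525564. As a hedged illustration only, not minikube's actual implementation, the Go sketch below produces a self-signed certificate with the same IP and DNS SANs; minikube instead signs the server cert with its CA, which this sketch omits.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Generate a throwaway key pair for the illustration.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-options-525564"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the san=[...] list in the provisioning log.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.95"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "cert-options-525564"},
		}
		// Self-signed for brevity: template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}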
	I1205 20:40:38.076566   42693 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:40:38.076745   42693 config.go:182] Loaded profile config "cert-options-525564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:38.076851   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:38.079702   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.080102   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.080126   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.080291   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:38.080481   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.080630   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.080772   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:38.080925   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:38.081329   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:38.081342   42693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:40:38.406830   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:40:38.406850   42693 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:40:38.406861   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetURL
	I1205 20:40:38.408346   42693 main.go:141] libmachine: (cert-options-525564) DBG | Using libvirt version 6000000
	I1205 20:40:38.410973   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.411323   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.411357   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.411559   42693 main.go:141] libmachine: Docker is up and running!
	I1205 20:40:38.411565   42693 main.go:141] libmachine: Reticulating splines...
	I1205 20:40:38.411583   42693 client.go:171] LocalClient.Create took 26.496544832s
	I1205 20:40:38.411605   42693 start.go:167] duration metric: libmachine.API.Create for "cert-options-525564" took 26.496599923s
	I1205 20:40:38.411613   42693 start.go:300] post-start starting for "cert-options-525564" (driver="kvm2")
	I1205 20:40:38.411624   42693 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:40:38.411638   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:38.411896   42693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:40:38.411921   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:38.414208   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.414557   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.414590   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.414721   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:38.414880   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.415001   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:38.415161   42693 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa Username:docker}
	I1205 20:40:38.512367   42693 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:40:38.516422   42693 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:40:38.516438   42693 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:40:38.516509   42693 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:40:38.516597   42693 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:40:38.516685   42693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:40:38.525790   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:40:38.547976   42693 start.go:303] post-start completed in 136.351131ms
	I1205 20:40:38.548009   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetConfigRaw
	I1205 20:40:38.548567   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetIP
	I1205 20:40:38.551314   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.551635   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.551675   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.551868   42693 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/config.json ...
	I1205 20:40:38.552061   42693 start.go:128] duration metric: createHost completed in 26.748196653s
	I1205 20:40:38.552077   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:38.554359   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.554689   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.554714   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.554827   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:38.555017   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.555157   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.555324   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:38.555519   42693 main.go:141] libmachine: Using SSH client type: native
	I1205 20:40:38.556033   42693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1205 20:40:38.556042   42693 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:40:38.687114   42693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701808838.668687449
	
	I1205 20:40:38.687128   42693 fix.go:206] guest clock: 1701808838.668687449
	I1205 20:40:38.687136   42693 fix.go:219] Guest: 2023-12-05 20:40:38.668687449 +0000 UTC Remote: 2023-12-05 20:40:38.552067862 +0000 UTC m=+41.255696625 (delta=116.619587ms)
	I1205 20:40:38.687183   42693 fix.go:190] guest clock delta is within tolerance: 116.619587ms
	I1205 20:40:38.687188   42693 start.go:83] releasing machines lock for "cert-options-525564", held for 26.883475309s
	I1205 20:40:38.687223   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:38.687495   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetIP
	I1205 20:40:38.690607   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.690994   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.691013   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.691184   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:38.691719   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:38.691886   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:38.691959   42693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:40:38.692000   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:38.692073   42693 ssh_runner.go:195] Run: cat /version.json
	I1205 20:40:38.692084   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:38.694842   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.695063   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.695159   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.695179   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.695324   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:38.695475   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.695479   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:38.695501   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:38.695587   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:38.695663   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:38.695809   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:38.695806   42693 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa Username:docker}
	I1205 20:40:38.695977   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:38.696124   42693 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa Username:docker}
	I1205 20:40:38.787185   42693 ssh_runner.go:195] Run: systemctl --version
	I1205 20:40:38.818170   42693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:40:38.982865   42693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:40:38.990847   42693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:40:38.990901   42693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:40:39.006646   42693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:40:39.006659   42693 start.go:475] detecting cgroup driver to use...
	I1205 20:40:39.006722   42693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:40:39.024760   42693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:40:39.037778   42693 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:40:39.037816   42693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:40:39.050820   42693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:40:39.063874   42693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:40:39.186157   42693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:40:39.317748   42693 docker.go:219] disabling docker service ...
	I1205 20:40:39.317807   42693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:40:39.331517   42693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:40:39.345734   42693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:40:39.463702   42693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:40:39.594656   42693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:40:39.608078   42693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:40:39.625233   42693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:40:39.625287   42693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:39.634780   42693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:40:39.634828   42693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:39.644168   42693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:39.653870   42693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:40:39.663492   42693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:40:39.672769   42693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:40:39.680717   42693 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:40:39.680760   42693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:40:39.694618   42693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:40:39.704043   42693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:40:39.861391   42693 ssh_runner.go:195] Run: sudo systemctl restart crio
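	(The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf, setting the pause image and the cgroup manager, before reloading systemd and restarting CRI-O. As an illustrative sketch only, assuming a local copy of the file named 02-crio.conf, the pause_image rewrite can be reproduced in Go as follows.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "02-crio.conf" // assumed local copy, for illustration only
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}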
	I1205 20:40:40.041486   42693 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:40:40.041557   42693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:40:40.048202   42693 start.go:543] Will wait 60s for crictl version
	I1205 20:40:40.048256   42693 ssh_runner.go:195] Run: which crictl
	I1205 20:40:40.053174   42693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:40:40.096220   42693 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:40:40.096299   42693 ssh_runner.go:195] Run: crio --version
	I1205 20:40:40.146173   42693 ssh_runner.go:195] Run: crio --version
	I1205 20:40:40.209763   42693 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:40:40.211119   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetIP
	I1205 20:40:40.214068   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:40.214606   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:40.214629   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:40.214838   42693 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:40:40.219324   42693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:40:40.235378   42693 localpath.go:92] copying /home/jenkins/minikube-integration/17731-6237/.minikube/client.crt -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/client.crt
	I1205 20:40:40.235487   42693 localpath.go:117] copying /home/jenkins/minikube-integration/17731-6237/.minikube/client.key -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/client.key
	I1205 20:40:40.235579   42693 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:40:40.235611   42693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:40:40.284694   42693 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:40:40.284767   42693 ssh_runner.go:195] Run: which lz4
	I1205 20:40:40.289193   42693 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:40:40.294213   42693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:40:40.294238   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:40:42.195017   42693 crio.go:444] Took 1.905885 seconds to copy over tarball
	I1205 20:40:42.195076   42693 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:40:41.765725   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:40:41.765767   42248 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:40:41.765784   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:40:41.833664   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:40:41.833697   42248 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:40:42.334350   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:40:42.362946   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:40:42.362995   42248 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:40:42.834597   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:40:42.842631   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:40:42.842661   42248 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:40:43.334848   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:40:43.340669   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I1205 20:40:43.350381   42248 api_server.go:141] control plane version: v1.28.4
	I1205 20:40:43.350415   42248 api_server.go:131] duration metric: took 6.280184977s to wait for apiserver health ...
	I1205 20:40:43.350426   42248 cni.go:84] Creating CNI manager for ""
	I1205 20:40:43.350435   42248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:40:43.427278   42248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
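	(The healthz checks above tolerate 403 and 500 responses until the restarted apiserver finally answers 200 ok, at which point the control-plane version is read and CNI configuration proceeds. A minimal sketch of that kind of polling loop, assuming the endpoint URL from the log and a two-minute deadline, might look like this; it is not minikube's api_server.go, just the same idea.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.72.159:8443/healthz" // endpoint taken from the log above
		client := &http.Client{
			Timeout:   5 * time.Second,
			// The VM's apiserver cert is not in the host trust store, so skip verification here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never became ready")
	}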
	I1205 20:40:40.046007   42939 main.go:141] libmachine: (stopped-upgrade-601680) Waiting to get IP...
	I1205 20:40:40.047125   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.047603   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.047721   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.047580   43173 retry.go:31] will retry after 241.913322ms: waiting for machine to come up
	I1205 20:40:40.291365   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.291861   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.291914   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.291830   43173 retry.go:31] will retry after 261.894859ms: waiting for machine to come up
	I1205 20:40:40.555507   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.556151   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.556287   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.556239   43173 retry.go:31] will retry after 372.036636ms: waiting for machine to come up
	I1205 20:40:40.930108   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:40.930945   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:40.930973   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:40.930837   43173 retry.go:31] will retry after 446.265845ms: waiting for machine to come up
	I1205 20:40:41.378530   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:41.379031   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:41.379093   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:41.378993   43173 retry.go:31] will retry after 661.612365ms: waiting for machine to come up
	I1205 20:40:42.041832   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:42.042668   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:42.042692   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:42.042589   43173 retry.go:31] will retry after 730.768928ms: waiting for machine to come up
	I1205 20:40:42.774829   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:42.775400   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:42.775443   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:42.775344   43173 retry.go:31] will retry after 1.118611444s: waiting for machine to come up
	I1205 20:40:43.895862   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:43.896354   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:43.896387   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:43.896292   43173 retry.go:31] will retry after 980.173523ms: waiting for machine to come up
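	(The repeated "will retry after ..." lines above show a retry loop with a growing backoff while waiting for the restarted VM to obtain a DHCP lease. A minimal sketch of that pattern follows; the lookup function is a stand-in assumption, not libmachine's real lease lookup.)

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP pretends the DHCP lease appears on the fifth attempt.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.95", nil
	}

	func main() {
		backoff := 250 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			fmt.Printf("retry %d: %v, will retry after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff += backoff / 2 // grow the wait between attempts
		}
		fmt.Println("gave up waiting for machine")
	}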
	I1205 20:40:43.447417   42248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:40:43.465338   42248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:40:43.491420   42248 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:40:43.542396   42248 system_pods.go:59] 6 kube-system pods found
	I1205 20:40:43.544210   42248 system_pods.go:61] "coredns-5dd5756b68-9pnrl" [97a39919-30dd-4eac-ba0e-84bf38fc72eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:40:43.544252   42248 system_pods.go:61] "etcd-pause-405510" [878e029c-7fbe-478c-91d7-aad746be989c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:40:43.544280   42248 system_pods.go:61] "kube-apiserver-pause-405510" [7c5a406d-f71a-4fff-82c8-1db31277d759] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:40:43.544310   42248 system_pods.go:61] "kube-controller-manager-pause-405510" [b1ff8f7e-473e-4529-94e0-76cf4c6f43c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:40:43.544331   42248 system_pods.go:61] "kube-proxy-kc59g" [797c4268-91d2-4278-8c00-319257a312cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:40:43.544355   42248 system_pods.go:61] "kube-scheduler-pause-405510" [0f362753-b2d5-4dec-ab2a-44cf2efa93f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:40:43.544364   42248 system_pods.go:74] duration metric: took 52.919333ms to wait for pod list to return data ...
	I1205 20:40:43.544377   42248 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:40:43.989484   42248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:40:43.989575   42248 node_conditions.go:123] node cpu capacity is 2
	I1205 20:40:43.989608   42248 node_conditions.go:105] duration metric: took 445.217223ms to run NodePressure ...
	I1205 20:40:43.989640   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:40:44.814896   42248 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:40:44.826959   42248 kubeadm.go:787] kubelet initialised
	I1205 20:40:44.826990   42248 kubeadm.go:788] duration metric: took 12.059446ms waiting for restarted kubelet to initialise ...
	I1205 20:40:44.826999   42248 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:40:44.834649   42248 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:45.436797   42693 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.241695192s)
	I1205 20:40:45.436815   42693 crio.go:451] Took 3.241781 seconds to extract the tarball
	I1205 20:40:45.436824   42693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:40:45.479061   42693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:40:45.571947   42693 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:40:45.571961   42693 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:40:45.572049   42693 ssh_runner.go:195] Run: crio config
	I1205 20:40:45.641690   42693 cni.go:84] Creating CNI manager for ""
	I1205 20:40:45.641705   42693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:40:45.641727   42693 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:40:45.641748   42693 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8555 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-525564 NodeName:cert-options-525564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:40:45.641920   42693 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-525564"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:40:45.641998   42693 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=cert-options-525564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-options-525564 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:}
	I1205 20:40:45.642064   42693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:40:45.651638   42693 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:40:45.651736   42693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:40:45.660348   42693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:40:45.676726   42693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:40:45.694306   42693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1205 20:40:45.712119   42693 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I1205 20:40:45.716398   42693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:40:45.729314   42693 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564 for IP: 192.168.39.95
	I1205 20:40:45.729339   42693 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:45.729515   42693 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:40:45.729565   42693 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:40:45.729671   42693 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/client.key
	I1205 20:40:45.729691   42693 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key.d5233873
	I1205 20:40:45.729703   42693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt.d5233873 with IP's: [127.0.0.1 192.168.15.15 192.168.39.95 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 20:40:45.791882   42693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt.d5233873 ...
	I1205 20:40:45.791896   42693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt.d5233873: {Name:mk4b196621cb4a1d570ee8345ca4cc305474b3d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:45.792056   42693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key.d5233873 ...
	I1205 20:40:45.792063   42693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key.d5233873: {Name:mk74039bfa6d0b9dafa876ca44d1d8323b5fd9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:45.792129   42693 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt.d5233873 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt
	I1205 20:40:45.792185   42693 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key.d5233873 -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key
	I1205 20:40:45.792225   42693 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.key
	I1205 20:40:45.792234   42693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.crt with IP's: []
	I1205 20:40:46.001725   42693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.crt ...
	I1205 20:40:46.001742   42693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.crt: {Name:mkf996277bc8bba31233edc35f911151c7111505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:46.027826   42693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.key ...
	I1205 20:40:46.027858   42693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.key: {Name:mk168c38e307141dd02d57af3c23819d5982bd9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:46.028047   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:40:46.028082   42693 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:40:46.028092   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:40:46.028127   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:40:46.028155   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:40:46.028173   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:40:46.028210   42693 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:40:46.029123   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1452 bytes)
	I1205 20:40:46.056779   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:40:46.083242   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:40:46.176100   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/cert-options-525564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:40:46.205091   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:40:46.229787   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:40:46.255349   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:40:46.281669   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:40:46.307713   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:40:46.333640   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:40:46.359809   42693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:40:46.386955   42693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:40:46.403903   42693 ssh_runner.go:195] Run: openssl version
	I1205 20:40:46.410140   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:40:46.422979   42693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:40:46.428134   42693 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:40:46.428191   42693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:40:46.434052   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:40:46.445545   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:40:46.456469   42693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:40:46.461336   42693 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:40:46.461395   42693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:40:46.467460   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:40:46.478606   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:40:46.490185   42693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:46.495447   42693 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:46.495493   42693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:40:46.501576   42693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:40:46.514342   42693 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:40:46.518875   42693 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 20:40:46.518922   42693 kubeadm.go:404] StartCluster: {Name:cert-options-525564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.28.4 ClusterName:cert-options-525564 Namespace:default APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP:192.168.39.95 Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:40:46.518991   42693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:40:46.519057   42693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:40:46.563248   42693 cri.go:89] found id: ""
	I1205 20:40:46.563312   42693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:40:46.573596   42693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:40:46.583694   42693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:40:46.595424   42693 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:40:46.595462   42693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:40:46.704646   42693 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:40:46.704882   42693 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:40:46.949451   42693 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:40:46.949590   42693 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:40:46.949737   42693 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:40:47.201572   42693 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:40:47.204670   42693 out.go:204]   - Generating certificates and keys ...
	I1205 20:40:47.204812   42693 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:40:47.204885   42693 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:40:47.324447   42693 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:40:44.878602   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:44.879167   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:44.879200   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:44.879127   43173 retry.go:31] will retry after 1.385044864s: waiting for machine to come up
	I1205 20:40:46.265240   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:46.265670   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:46.265700   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:46.265623   43173 retry.go:31] will retry after 1.740287111s: waiting for machine to come up
	I1205 20:40:48.008178   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:48.008727   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:48.008757   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:48.008671   43173 retry.go:31] will retry after 1.931230772s: waiting for machine to come up
	I1205 20:40:47.578195   42693 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:40:47.678159   42693 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:40:47.923280   42693 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 20:40:48.079230   42693 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 20:40:48.079435   42693 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-options-525564 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I1205 20:40:48.243628   42693 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 20:40:48.243811   42693 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-options-525564 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I1205 20:40:48.291698   42693 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:40:48.446144   42693 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:40:48.719993   42693 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 20:40:48.720099   42693 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:40:48.944507   42693 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:40:49.183396   42693 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:40:49.353615   42693 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:40:49.494137   42693 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:40:49.494649   42693 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:40:49.499863   42693 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:40:46.179403   42248 pod_ready.go:92] pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:46.179434   42248 pod_ready.go:81] duration metric: took 1.344755184s waiting for pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:46.179448   42248 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:48.201015   42248 pod_ready.go:102] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"False"
	I1205 20:40:49.503330   42693 out.go:204]   - Booting up control plane ...
	I1205 20:40:49.503504   42693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:40:49.503594   42693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:40:49.503651   42693 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:40:49.518341   42693 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:40:49.519151   42693 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:40:49.519218   42693 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:40:49.665824   42693 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:40:49.942075   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:49.942587   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:49.942618   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:49.942527   43173 retry.go:31] will retry after 3.401653628s: waiting for machine to come up
	I1205 20:40:53.348042   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:53.348595   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:53.348624   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:53.348542   43173 retry.go:31] will retry after 3.558160616s: waiting for machine to come up
	I1205 20:40:50.701027   42248 pod_ready.go:102] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"False"
	I1205 20:40:52.701137   42248 pod_ready.go:102] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"False"
	I1205 20:40:54.701960   42248 pod_ready.go:102] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"False"
	I1205 20:40:57.201183   42248 pod_ready.go:92] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.201208   42248 pod_ready.go:81] duration metric: took 11.021753248s waiting for pod "etcd-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.201219   42248 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.208190   42248 pod_ready.go:92] pod "kube-apiserver-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.208215   42248 pod_ready.go:81] duration metric: took 6.988815ms waiting for pod "kube-apiserver-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.208228   42248 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.214533   42248 pod_ready.go:92] pod "kube-controller-manager-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.214561   42248 pod_ready.go:81] duration metric: took 6.315919ms waiting for pod "kube-controller-manager-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.214582   42248 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kc59g" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.221187   42248 pod_ready.go:92] pod "kube-proxy-kc59g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.221210   42248 pod_ready.go:81] duration metric: took 6.619046ms waiting for pod "kube-proxy-kc59g" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.221222   42248 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.230623   42248 pod_ready.go:92] pod "kube-scheduler-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.230645   42248 pod_ready.go:81] duration metric: took 9.415807ms waiting for pod "kube-scheduler-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.230654   42248 pod_ready.go:38] duration metric: took 12.40364473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:40:57.230682   42248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:40:57.248138   42248 ops.go:34] apiserver oom_adj: -16
	I1205 20:40:57.248163   42248 kubeadm.go:640] restartCluster took 39.336649496s
	I1205 20:40:57.248173   42248 kubeadm.go:406] StartCluster complete in 39.550299222s
	I1205 20:40:57.248191   42248 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:57.248272   42248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:40:57.249162   42248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:57.249420   42248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:40:57.249749   42248 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:57.249791   42248 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:40:57.251744   42248 out.go:177] * Enabled addons: 
	I1205 20:40:57.250019   42248 kapi.go:59] client config for pause-405510: &rest.Config{Host:"https://192.168.72.159:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/client.crt", KeyFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/client.key", CAFile:"/home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c259c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:40:57.253870   42248 addons.go:502] enable addons completed in 4.076758ms: enabled=[]
	I1205 20:40:57.258469   42248 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-405510" context rescaled to 1 replicas
	I1205 20:40:57.258510   42248 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:40:57.259984   42248 out.go:177] * Verifying Kubernetes components...
	I1205 20:40:57.671483   42693 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.007487 seconds
	I1205 20:40:57.671626   42693 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:40:57.690873   42693 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:40:58.225627   42693 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:40:58.225908   42693 kubeadm.go:322] [mark-control-plane] Marking the node cert-options-525564 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:40:58.741709   42693 kubeadm.go:322] [bootstrap-token] Using token: nhw5lq.h7sv8k9yfj2cobt7
	I1205 20:40:58.743327   42693 out.go:204]   - Configuring RBAC rules ...
	I1205 20:40:58.743465   42693 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:40:58.749708   42693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:40:58.758827   42693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:40:58.766988   42693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:40:58.774883   42693 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:40:58.779737   42693 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:40:58.801695   42693 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:40:59.113871   42693 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:40:59.174225   42693 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:40:59.175553   42693 kubeadm.go:322] 
	I1205 20:40:59.175629   42693 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:40:59.175635   42693 kubeadm.go:322] 
	I1205 20:40:59.175737   42693 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:40:59.175742   42693 kubeadm.go:322] 
	I1205 20:40:59.175774   42693 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:40:59.175853   42693 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:40:59.175912   42693 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:40:59.175918   42693 kubeadm.go:322] 
	I1205 20:40:59.175981   42693 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:40:59.175986   42693 kubeadm.go:322] 
	I1205 20:40:59.176075   42693 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:40:59.176080   42693 kubeadm.go:322] 
	I1205 20:40:59.176163   42693 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:40:59.176257   42693 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:40:59.176349   42693 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:40:59.176359   42693 kubeadm.go:322] 
	I1205 20:40:59.176463   42693 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:40:59.176561   42693 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:40:59.176567   42693 kubeadm.go:322] 
	I1205 20:40:59.176673   42693 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8555 --token nhw5lq.h7sv8k9yfj2cobt7 \
	I1205 20:40:59.176805   42693 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:40:59.176856   42693 kubeadm.go:322] 	--control-plane 
	I1205 20:40:59.176864   42693 kubeadm.go:322] 
	I1205 20:40:59.176972   42693 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:40:59.176978   42693 kubeadm.go:322] 
	I1205 20:40:59.177101   42693 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8555 --token nhw5lq.h7sv8k9yfj2cobt7 \
	I1205 20:40:59.177219   42693 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:40:59.177551   42693 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:40:59.177574   42693 cni.go:84] Creating CNI manager for ""
	I1205 20:40:59.177584   42693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:40:59.179523   42693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:40:56.907835   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | domain stopped-upgrade-601680 has defined MAC address 52:54:00:93:33:e4 in network minikube-net
	I1205 20:40:56.908361   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | unable to find current IP address of domain stopped-upgrade-601680 in network minikube-net
	I1205 20:40:56.908392   42939 main.go:141] libmachine: (stopped-upgrade-601680) DBG | I1205 20:40:56.908307   43173 retry.go:31] will retry after 5.657646006s: waiting for machine to come up
	I1205 20:40:59.180991   42693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:40:59.191696   42693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:40:59.248003   42693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:40:59.248134   42693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:40:59.248155   42693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=cert-options-525564 minikube.k8s.io/updated_at=2023_12_05T20_40_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:40:59.292670   42693 ops.go:34] apiserver oom_adj: -16
	I1205 20:40:59.660245   42693 kubeadm.go:1088] duration metric: took 412.177708ms to wait for elevateKubeSystemPrivileges.
	I1205 20:40:59.660269   42693 kubeadm.go:406] StartCluster complete in 13.141349146s
	I1205 20:40:59.660286   42693 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:59.660357   42693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:40:59.661721   42693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:40:59.661980   42693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:40:59.662102   42693 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:40:59.662171   42693 addons.go:69] Setting storage-provisioner=true in profile "cert-options-525564"
	I1205 20:40:59.662178   42693 addons.go:69] Setting default-storageclass=true in profile "cert-options-525564"
	I1205 20:40:59.662191   42693 addons.go:231] Setting addon storage-provisioner=true in "cert-options-525564"
	I1205 20:40:59.662198   42693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-525564"
	I1205 20:40:59.662243   42693 config.go:182] Loaded profile config "cert-options-525564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:40:59.662290   42693 host.go:66] Checking if "cert-options-525564" exists ...
	I1205 20:40:59.662770   42693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:59.662804   42693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:59.662812   42693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:59.662826   42693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:59.678709   42693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I1205 20:40:59.679272   42693 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:59.679838   42693 main.go:141] libmachine: Using API Version  1
	I1205 20:40:59.679848   42693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:59.680499   42693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:59.680686   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetState
	I1205 20:40:59.681348   42693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I1205 20:40:59.681693   42693 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:59.682136   42693 main.go:141] libmachine: Using API Version  1
	I1205 20:40:59.682157   42693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:59.682526   42693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:59.683567   42693 addons.go:231] Setting addon default-storageclass=true in "cert-options-525564"
	I1205 20:40:59.683594   42693 host.go:66] Checking if "cert-options-525564" exists ...
	I1205 20:40:59.683878   42693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:59.683897   42693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:59.684386   42693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:59.684416   42693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:59.699674   42693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I1205 20:40:59.700113   42693 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:59.700242   42693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44821
	I1205 20:40:59.700623   42693 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:59.700740   42693 main.go:141] libmachine: Using API Version  1
	I1205 20:40:59.700763   42693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:59.701092   42693 main.go:141] libmachine: Using API Version  1
	I1205 20:40:59.701102   42693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:59.701146   42693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:59.701321   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetState
	I1205 20:40:59.701423   42693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:59.701984   42693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:40:59.702027   42693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:40:59.703326   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:59.706522   42693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:40:59.709204   42693 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:40:59.709211   42693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:40:59.709225   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHHostname
	I1205 20:40:59.712425   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:59.712888   42693 main.go:141] libmachine: (cert-options-525564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:b7:21", ip: ""} in network mk-cert-options-525564: {Iface:virbr1 ExpiryTime:2023-12-05 21:40:28 +0000 UTC Type:0 Mac:52:54:00:ab:b7:21 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:cert-options-525564 Clientid:01:52:54:00:ab:b7:21}
	I1205 20:40:59.712919   42693 main.go:141] libmachine: (cert-options-525564) DBG | domain cert-options-525564 has defined IP address 192.168.39.95 and MAC address 52:54:00:ab:b7:21 in network mk-cert-options-525564
	I1205 20:40:59.713042   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHPort
	I1205 20:40:59.713241   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHKeyPath
	I1205 20:40:59.713396   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetSSHUsername
	I1205 20:40:59.713560   42693 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/cert-options-525564/id_rsa Username:docker}
	I1205 20:40:59.717861   42693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
	I1205 20:40:59.718227   42693 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:40:59.718708   42693 main.go:141] libmachine: Using API Version  1
	I1205 20:40:59.718717   42693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:40:59.719034   42693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:40:59.719179   42693 main.go:141] libmachine: (cert-options-525564) Calling .GetState
	I1205 20:40:59.720438   42693 main.go:141] libmachine: (cert-options-525564) Calling .DriverName
	I1205 20:40:59.720638   42693 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-options-525564" context rescaled to 1 replicas
	I1205 20:40:59.720664   42693 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.95 Port:8555 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:40:59.724086   42693 out.go:177] * Verifying Kubernetes components...
	I1205 20:40:57.261509   42248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:40:57.390723   42248 node_ready.go:35] waiting up to 6m0s for node "pause-405510" to be "Ready" ...
	I1205 20:40:57.390741   42248 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:40:57.398012   42248 node_ready.go:49] node "pause-405510" has status "Ready":"True"
	I1205 20:40:57.398040   42248 node_ready.go:38] duration metric: took 7.280993ms waiting for node "pause-405510" to be "Ready" ...
	I1205 20:40:57.398056   42248 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:40:57.600649   42248 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.998565   42248 pod_ready.go:92] pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:57.998595   42248 pod_ready.go:81] duration metric: took 397.919621ms waiting for pod "coredns-5dd5756b68-9pnrl" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:57.998615   42248 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:58.398385   42248 pod_ready.go:92] pod "etcd-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:58.398415   42248 pod_ready.go:81] duration metric: took 399.790863ms waiting for pod "etcd-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:58.398428   42248 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:58.798520   42248 pod_ready.go:92] pod "kube-apiserver-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:58.798545   42248 pod_ready.go:81] duration metric: took 400.110118ms waiting for pod "kube-apiserver-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:58.798555   42248 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.197924   42248 pod_ready.go:92] pod "kube-controller-manager-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:59.197948   42248 pod_ready.go:81] duration metric: took 399.387075ms waiting for pod "kube-controller-manager-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.197957   42248 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kc59g" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.598245   42248 pod_ready.go:92] pod "kube-proxy-kc59g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:59.598285   42248 pod_ready.go:81] duration metric: took 400.321546ms waiting for pod "kube-proxy-kc59g" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.598298   42248 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.999613   42248 pod_ready.go:92] pod "kube-scheduler-pause-405510" in "kube-system" namespace has status "Ready":"True"
	I1205 20:40:59.999643   42248 pod_ready.go:81] duration metric: took 401.336841ms waiting for pod "kube-scheduler-pause-405510" in "kube-system" namespace to be "Ready" ...
	I1205 20:40:59.999654   42248 pod_ready.go:38] duration metric: took 2.60158697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:40:59.999670   42248 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:40:59.999724   42248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:41:00.013689   42248 api_server.go:72] duration metric: took 2.755145152s to wait for apiserver process to appear ...
	I1205 20:41:00.013722   42248 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:41:00.013739   42248 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I1205 20:41:00.019400   42248 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I1205 20:41:00.020993   42248 api_server.go:141] control plane version: v1.28.4
	I1205 20:41:00.021016   42248 api_server.go:131] duration metric: took 7.286644ms to wait for apiserver health ...
	I1205 20:41:00.021026   42248 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:41:00.202801   42248 system_pods.go:59] 6 kube-system pods found
	I1205 20:41:00.202839   42248 system_pods.go:61] "coredns-5dd5756b68-9pnrl" [97a39919-30dd-4eac-ba0e-84bf38fc72eb] Running
	I1205 20:41:00.202848   42248 system_pods.go:61] "etcd-pause-405510" [878e029c-7fbe-478c-91d7-aad746be989c] Running
	I1205 20:41:00.202856   42248 system_pods.go:61] "kube-apiserver-pause-405510" [7c5a406d-f71a-4fff-82c8-1db31277d759] Running
	I1205 20:41:00.202864   42248 system_pods.go:61] "kube-controller-manager-pause-405510" [b1ff8f7e-473e-4529-94e0-76cf4c6f43c0] Running
	I1205 20:41:00.202871   42248 system_pods.go:61] "kube-proxy-kc59g" [797c4268-91d2-4278-8c00-319257a312cf] Running
	I1205 20:41:00.202879   42248 system_pods.go:61] "kube-scheduler-pause-405510" [0f362753-b2d5-4dec-ab2a-44cf2efa93f7] Running
	I1205 20:41:00.202887   42248 system_pods.go:74] duration metric: took 181.854347ms to wait for pod list to return data ...
	I1205 20:41:00.202908   42248 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:41:00.398067   42248 default_sa.go:45] found service account: "default"
	I1205 20:41:00.398115   42248 default_sa.go:55] duration metric: took 195.194284ms for default service account to be created ...
	I1205 20:41:00.398127   42248 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:41:00.603052   42248 system_pods.go:86] 6 kube-system pods found
	I1205 20:41:00.603086   42248 system_pods.go:89] "coredns-5dd5756b68-9pnrl" [97a39919-30dd-4eac-ba0e-84bf38fc72eb] Running
	I1205 20:41:00.603093   42248 system_pods.go:89] "etcd-pause-405510" [878e029c-7fbe-478c-91d7-aad746be989c] Running
	I1205 20:41:00.603100   42248 system_pods.go:89] "kube-apiserver-pause-405510" [7c5a406d-f71a-4fff-82c8-1db31277d759] Running
	I1205 20:41:00.603107   42248 system_pods.go:89] "kube-controller-manager-pause-405510" [b1ff8f7e-473e-4529-94e0-76cf4c6f43c0] Running
	I1205 20:41:00.603112   42248 system_pods.go:89] "kube-proxy-kc59g" [797c4268-91d2-4278-8c00-319257a312cf] Running
	I1205 20:41:00.603119   42248 system_pods.go:89] "kube-scheduler-pause-405510" [0f362753-b2d5-4dec-ab2a-44cf2efa93f7] Running
	I1205 20:41:00.603126   42248 system_pods.go:126] duration metric: took 204.993704ms to wait for k8s-apps to be running ...
	I1205 20:41:00.603136   42248 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:41:00.603185   42248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:41:00.624012   42248 system_svc.go:56] duration metric: took 20.866346ms WaitForService to wait for kubelet.
	I1205 20:41:00.624045   42248 kubeadm.go:581] duration metric: took 3.365507111s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:41:00.624068   42248 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:41:00.799407   42248 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:41:00.799446   42248 node_conditions.go:123] node cpu capacity is 2
	I1205 20:41:00.799459   42248 node_conditions.go:105] duration metric: took 175.384354ms to run NodePressure ...
	I1205 20:41:00.799473   42248 start.go:228] waiting for startup goroutines ...
	I1205 20:41:00.799482   42248 start.go:233] waiting for cluster config update ...
	I1205 20:41:00.799491   42248 start.go:242] writing updated cluster config ...
	I1205 20:41:00.812888   42248 ssh_runner.go:195] Run: rm -f paused
	I1205 20:41:00.883654   42248 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:41:00.885912   42248 out.go:177] * Done! kubectl is now configured to use "pause-405510" cluster and "default" namespace by default
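	For reference, the apiserver health probe recorded above (api_server.go, 20:41:00) can be repeated by hand. A minimal sketch, assuming the kubeconfig context written by this run ("pause-405510") is still present and the VM is still reachable at 192.168.72.159 (both values come from this log and will differ on another run):

		# hit the same /healthz endpoint through the kubeconfig context
		kubectl --context pause-405510 get --raw /healthz
		# or probe the endpoint directly, skipping certificate verification for brevity
		curl -k https://192.168.72.159:8443/healthz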
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:38:26 UTC, ends at Tue 2023-12-05 20:41:01 UTC. --
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.716151671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808861716129577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=c956e2f7-ce83-4f26-8393-02076332ca1e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.719171041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b105d28-fc13-4c8e-986e-816e33bb1a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.719278389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b105d28-fc13-4c8e-986e-816e33bb1a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.719803280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b105d28-fc13-4c8e-986e-816e33bb1a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.775660657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=989d6451-b703-4500-9064-a0401cbb403f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.775768227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=989d6451-b703-4500-9064-a0401cbb403f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.777643693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=463f7a1b-2620-4a22-9bb4-2ef0335af824 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.778340829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808861778321765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=463f7a1b-2620-4a22-9bb4-2ef0335af824 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.779431917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=20c25951-ab1a-4590-a5bf-0bcd0bab31ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.779532969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=20c25951-ab1a-4590-a5bf-0bcd0bab31ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.779886490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=20c25951-ab1a-4590-a5bf-0bcd0bab31ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.841286333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=60635990-3a18-4ba8-ac9c-e9e7a36070f7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.841344868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=60635990-3a18-4ba8-ac9c-e9e7a36070f7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.843418605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=91a6b97e-999e-49d9-a3e2-99f08e9e199a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.843839797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808861843761076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=91a6b97e-999e-49d9-a3e2-99f08e9e199a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.844437378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1e15dd12-fc46-415d-846b-c3e91f2cfe58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.844484160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1e15dd12-fc46-415d-846b-c3e91f2cfe58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.844729481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1e15dd12-fc46-415d-846b-c3e91f2cfe58 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.903096198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1b97edcb-1e02-4273-9bcb-99fbda057077 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.903163892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1b97edcb-1e02-4273-9bcb-99fbda057077 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.904195258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2c658a7b-67c0-43c6-b968-5e90fb477d85 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.904540461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808861904525405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=2c658a7b-67c0-43c6-b968-5e90fb477d85 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.905056279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9d7e47b-81e6-4cdb-92a9-c8b205db3793 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.905107107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9d7e47b-81e6-4cdb-92a9-c8b205db3793 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:01 pause-405510 crio[2465]: time="2023-12-05 20:41:01.905336931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9d7e47b-81e6-4cdb-92a9-c8b205db3793 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a38158f36b783       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   19 seconds ago      Running             kube-proxy                2                   c308da359a3a8       kube-proxy-kc59g
	cbac2bfcf7ea4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago      Running             coredns                   2                   8c1a5bbbc2195       coredns-5dd5756b68-9pnrl
	cdfffea78b398       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   26 seconds ago      Running             kube-apiserver            2                   c7f7de2536c28       kube-apiserver-pause-405510
	0ae1030901ed6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   26 seconds ago      Running             kube-controller-manager   2                   677938363e431       kube-controller-manager-pause-405510
	b7813537c1c55       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago      Running             etcd                      2                   ce03864ac751a       etcd-pause-405510
	d1dc791b37ec7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   26 seconds ago      Running             kube-scheduler            2                   fd4fe87ddac44       kube-scheduler-pause-405510
	c7807fa1e05c8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   34 seconds ago      Exited              kube-proxy                1                   c308da359a3a8       kube-proxy-kc59g
	88c9bd4032207       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago      Exited              coredns                   1                   8c1a5bbbc2195       coredns-5dd5756b68-9pnrl
	a32e80a05fcba       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   43 seconds ago      Exited              kube-scheduler            1                   fd4fe87ddac44       kube-scheduler-pause-405510
	47d09874f4291       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   47 seconds ago      Exited              etcd                      1                   2c8f69a414267       etcd-pause-405510
	be2901c647efd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   48 seconds ago      Exited              kube-controller-manager   1                   4abdb8355a631       kube-controller-manager-pause-405510
	e0621a060b3f1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   48 seconds ago      Exited              kube-apiserver            1                   a8768c7437a5a       kube-apiserver-pause-405510
	
	* 
	* ==> coredns [88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50730 - 54808 "HINFO IN 6673357796809671742.2682621850958098627. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009636218s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47220 - 40634 "HINFO IN 2169709015068101599.196534304026670715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009494989s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-405510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-405510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=pause-405510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_39_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:38:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-405510
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:40:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:39:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    pause-405510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c507931aeb84d0a868748ce96a3397d
	  System UUID:                1c507931-aeb8-4d0a-8687-48ce96a3397d
	  Boot ID:                    41174356-b7eb-4e2d-98cd-1226e0717822
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-9pnrl                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     106s
	  kube-system                 etcd-pause-405510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-405510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-405510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-kc59g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-pause-405510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  119s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node pause-405510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node pause-405510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node pause-405510 status is now: NodeHasSufficientPID
	  Normal  NodeReady                119s               kubelet          Node pause-405510 status is now: NodeReady
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node pause-405510 event: Registered Node pause-405510 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-405510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-405510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-405510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-405510 event: Registered Node pause-405510 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.608540] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.759070] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143991] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.003362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.673797] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.129153] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.161032] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.124096] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.296048] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +11.737951] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[Dec 5 20:39] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[ +54.536873] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 5 20:40] systemd-fstab-generator[2212]: Ignoring "noauto" for root device
	[  +0.307541] systemd-fstab-generator[2230]: Ignoring "noauto" for root device
	[  +0.419583] systemd-fstab-generator[2272]: Ignoring "noauto" for root device
	[  +0.390310] systemd-fstab-generator[2313]: Ignoring "noauto" for root device
	[  +0.497669] systemd-fstab-generator[2343]: Ignoring "noauto" for root device
	[ +20.498261] systemd-fstab-generator[3213]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d] <==
	* 
	* 
	* ==> etcd [b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534] <==
	* {"level":"warn","ts":"2023-12-05T20:40:43.978481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.016093ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14017608028276443491 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-kc59g\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" value_size:4460 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T20:40:43.978607Z","caller":"traceutil/trace.go:171","msg":"trace[2014423653] linearizableReadLoop","detail":"{readStateIndex:473; appliedIndex:472; }","duration":"434.141671ms","start":"2023-12-05T20:40:43.544452Z","end":"2023-12-05T20:40:43.978593Z","steps":["trace[2014423653] 'read index received'  (duration: 172.236351ms)","trace[2014423653] 'applied index is now lower than readState.Index'  (duration: 261.903591ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:40:43.978682Z","caller":"traceutil/trace.go:171","msg":"trace[1132292377] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"442.291784ms","start":"2023-12-05T20:40:43.536381Z","end":"2023-12-05T20:40:43.978672Z","steps":["trace[1132292377] 'process raft request'  (duration: 180.360528ms)","trace[1132292377] 'compare'  (duration: 260.862561ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:40:43.978751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:43.536355Z","time spent":"442.347782ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4511,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-kc59g\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" value_size:4460 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" > >"}
	{"level":"warn","ts":"2023-12-05T20:40:43.978943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.511064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2023-12-05T20:40:43.981049Z","caller":"traceutil/trace.go:171","msg":"trace[1850116939] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:444; }","duration":"436.49328ms","start":"2023-12-05T20:40:43.544421Z","end":"2023-12-05T20:40:43.980915Z","steps":["trace[1850116939] 'agreement among raft nodes before linearized reading'  (duration: 434.477817ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:43.981115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:43.544404Z","time spent":"436.691268ms","remote":"127.0.0.1:46710","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5450,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2023-12-05T20:40:43.98138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.085975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-12-05T20:40:43.981445Z","caller":"traceutil/trace.go:171","msg":"trace[2010443615] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:444; }","duration":"192.153064ms","start":"2023-12-05T20:40:43.789281Z","end":"2023-12-05T20:40:43.981434Z","steps":["trace[2010443615] 'agreement among raft nodes before linearized reading'  (duration: 192.050021ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.329793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.229561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-12-05T20:40:44.3299Z","caller":"traceutil/trace.go:171","msg":"trace[913754849] range","detail":"{range_begin:/registry/deployments/kube-system/; range_end:/registry/deployments/kube-system0; response_count:1; response_revision:444; }","duration":"223.357831ms","start":"2023-12-05T20:40:44.106522Z","end":"2023-12-05T20:40:44.32988Z","steps":["trace[913754849] 'range keys from in-memory index tree'  (duration: 223.134885ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.329802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.585361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" ","response":"range_response_count:1 size:201"}
	{"level":"info","ts":"2023-12-05T20:40:44.330157Z","caller":"traceutil/trace.go:171","msg":"trace[1796461597] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/cronjob-controller; range_end:; response_count:1; response_revision:444; }","duration":"245.933904ms","start":"2023-12-05T20:40:44.084209Z","end":"2023-12-05T20:40:44.330143Z","steps":["trace[1796461597] 'range keys from in-memory index tree'  (duration: 245.475991ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:44.613681Z","caller":"traceutil/trace.go:171","msg":"trace[1109098757] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"202.597161ms","start":"2023-12-05T20:40:44.41107Z","end":"2023-12-05T20:40:44.613667Z","steps":["trace[1109098757] 'read index received'  (duration: 202.440922ms)","trace[1109098757] 'applied index is now lower than readState.Index'  (duration: 155.416µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:40:44.613773Z","caller":"traceutil/trace.go:171","msg":"trace[1364583562] transaction","detail":"{read_only:false; number_of_response:0; response_revision:444; }","duration":"204.519879ms","start":"2023-12-05T20:40:44.409249Z","end":"2023-12-05T20:40:44.613769Z","steps":["trace[1364583562] 'process raft request'  (duration: 204.305525ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.614088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.985227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"warn","ts":"2023-12-05T20:40:44.614163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.104565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2023-12-05T20:40:44.614223Z","caller":"traceutil/trace.go:171","msg":"trace[128975514] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:444; }","duration":"203.166368ms","start":"2023-12-05T20:40:44.411047Z","end":"2023-12-05T20:40:44.614214Z","steps":["trace[128975514] 'agreement among raft nodes before linearized reading'  (duration: 203.079767ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:44.61419Z","caller":"traceutil/trace.go:171","msg":"trace[1421691841] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:444; }","duration":"203.109941ms","start":"2023-12-05T20:40:44.411068Z","end":"2023-12-05T20:40:44.614178Z","steps":["trace[1421691841] 'agreement among raft nodes before linearized reading'  (duration: 202.85108ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:46.165823Z","caller":"traceutil/trace.go:171","msg":"trace[477641190] linearizableReadLoop","detail":"{readStateIndex:486; appliedIndex:485; }","duration":"310.311473ms","start":"2023-12-05T20:40:45.855492Z","end":"2023-12-05T20:40:46.165804Z","steps":["trace[477641190] 'read index received'  (duration: 310.102444ms)","trace[477641190] 'applied index is now lower than readState.Index'  (duration: 208.231µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:40:46.166188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.692766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" ","response":"range_response_count:1 size:4671"}
	{"level":"info","ts":"2023-12-05T20:40:46.166271Z","caller":"traceutil/trace.go:171","msg":"trace[1486337147] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-9pnrl; range_end:; response_count:1; response_revision:447; }","duration":"310.7922ms","start":"2023-12-05T20:40:45.855469Z","end":"2023-12-05T20:40:46.166261Z","steps":["trace[1486337147] 'agreement among raft nodes before linearized reading'  (duration: 310.490217ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:46.166402Z","caller":"traceutil/trace.go:171","msg":"trace[1571552869] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"346.469536ms","start":"2023-12-05T20:40:45.81992Z","end":"2023-12-05T20:40:46.166389Z","steps":["trace[1571552869] 'process raft request'  (duration: 345.74185ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:46.166493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:45.855453Z","time spent":"310.968218ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4694,"request content":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" "}
	{"level":"warn","ts":"2023-12-05T20:40:46.166506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:45.819904Z","time spent":"346.554368ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4656,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" mod_revision:443 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" value_size:4597 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" > >"}
	
	* 
	* ==> kernel <==
	*  20:41:02 up 2 min,  0 users,  load average: 1.77, 0.76, 0.29
	Linux pause-405510 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143] <==
	* I1205 20:40:41.672352       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1205 20:40:41.757587       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 20:40:41.757739       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:40:41.855126       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:40:41.863615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:40:41.874130       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 20:40:41.874219       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 20:40:41.874842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 20:40:41.883365       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 20:40:41.876797       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 20:40:41.883588       1 aggregator.go:166] initial CRD sync complete...
	I1205 20:40:41.883621       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 20:40:41.883643       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:40:41.883665       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:40:41.876810       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1205 20:40:41.918539       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 20:40:41.951646       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 20:40:42.676449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:40:44.624587       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 20:40:44.676581       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 20:40:44.733483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:40:44.786257       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:40:44.800310       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:40:54.931473       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 20:40:55.000723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6] <==
	* 
	* 
	* ==> kube-controller-manager [0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9] <==
	* I1205 20:40:54.944731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.113µs"
	I1205 20:40:54.945067       1 shared_informer.go:318] Caches are synced for taint
	I1205 20:40:54.945220       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1205 20:40:54.945354       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-405510"
	I1205 20:40:54.945449       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1205 20:40:54.945509       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1205 20:40:54.945553       1 taint_manager.go:210] "Sending events to api server"
	I1205 20:40:54.946120       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1205 20:40:54.946309       1 event.go:307] "Event occurred" object="pause-405510" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-405510 event: Registered Node pause-405510 in Controller"
	I1205 20:40:54.946500       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1205 20:40:54.947927       1 shared_informer.go:318] Caches are synced for crt configmap
	I1205 20:40:54.951195       1 shared_informer.go:318] Caches are synced for persistent volume
	I1205 20:40:54.951923       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1205 20:40:54.953780       1 shared_informer.go:318] Caches are synced for expand
	I1205 20:40:54.959345       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1205 20:40:54.968349       1 shared_informer.go:318] Caches are synced for PV protection
	I1205 20:40:54.974780       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1205 20:40:55.089279       1 shared_informer.go:318] Caches are synced for stateful set
	I1205 20:40:55.092700       1 shared_informer.go:318] Caches are synced for disruption
	I1205 20:40:55.099236       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1205 20:40:55.128749       1 shared_informer.go:318] Caches are synced for resource quota
	I1205 20:40:55.159094       1 shared_informer.go:318] Caches are synced for resource quota
	I1205 20:40:55.508890       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 20:40:55.508957       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1205 20:40:55.520307       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01] <==
	* 
	* 
	* ==> kube-proxy [a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928] <==
	* I1205 20:40:42.663360       1 server_others.go:69] "Using iptables proxy"
	I1205 20:40:42.684757       1 node.go:141] Successfully retrieved node IP: 192.168.72.159
	I1205 20:40:42.767145       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:40:42.767234       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:40:42.776762       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:40:42.776877       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:40:42.777237       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:40:42.777282       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:40:42.779842       1 config.go:188] "Starting service config controller"
	I1205 20:40:42.779894       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:40:42.779929       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:40:42.779933       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:40:42.780516       1 config.go:315] "Starting node config controller"
	I1205 20:40:42.780563       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:40:42.880064       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:40:42.880161       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:40:42.880625       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c] <==
	* I1205 20:40:27.834790       1 server_others.go:69] "Using iptables proxy"
	E1205 20:40:27.841154       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-405510": dial tcp 192.168.72.159:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223] <==
	* E1205 20:40:29.050539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.72.159:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.058358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.058424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.609099       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.72.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.609211       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.72.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.704768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.704875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.725540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.725632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.889598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.72.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.889708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.72.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.897508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.72.159:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.897578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.72.159:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.265416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.265540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.355754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.355832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.542106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.542184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:31.266663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:31.266778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:33.327959       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1205 20:40:33.328683       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1205 20:40:33.328803       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1205 20:40:33.329079       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a] <==
	* I1205 20:40:38.286663       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:40:41.823104       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:40:41.823174       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:40:41.823237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:40:41.823251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:40:41.884635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1205 20:40:41.884732       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:40:41.889625       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:40:41.889764       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:40:41.889781       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:40:41.889794       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:40:41.991299       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:38:26 UTC, ends at Tue 2023-12-05 20:41:02 UTC. --
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.074361    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.074419    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.324803    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-405510&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.324892    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-405510&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.398773    3219 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-405510?timeout=10s\": dial tcp 192.168.72.159:8443: connect: connection refused" interval="1.6s"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.422353    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.422405    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: I1205 20:40:36.509091    3219 kubelet_node_status.go:70] "Attempting to register node" node="pause-405510"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.509449    3219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.159:8443: connect: connection refused" node="pause-405510"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.540945    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.541071    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:38 pause-405510 kubelet[3219]: I1205 20:40:38.111705    3219 kubelet_node_status.go:70] "Attempting to register node" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.898224    3219 kubelet_node_status.go:108] "Node was previously registered" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.898389    3219 kubelet_node_status.go:73] "Successfully registered node" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.900656    3219 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.901758    3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.976081    3219 apiserver.go:52] "Watching apiserver"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.985606    3219 topology_manager.go:215] "Topology Admit Handler" podUID="797c4268-91d2-4278-8c00-319257a312cf" podNamespace="kube-system" podName="kube-proxy-kc59g"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.985781    3219 topology_manager.go:215] "Topology Admit Handler" podUID="97a39919-30dd-4eac-ba0e-84bf38fc72eb" podNamespace="kube-system" podName="coredns-5dd5756b68-9pnrl"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.993478    3219 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.029312    3219 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/797c4268-91d2-4278-8c00-319257a312cf-lib-modules\") pod \"kube-proxy-kc59g\" (UID: \"797c4268-91d2-4278-8c00-319257a312cf\") " pod="kube-system/kube-proxy-kc59g"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.029448    3219 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/797c4268-91d2-4278-8c00-319257a312cf-xtables-lock\") pod \"kube-proxy-kc59g\" (UID: \"797c4268-91d2-4278-8c00-319257a312cf\") " pod="kube-system/kube-proxy-kc59g"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.286362    3219 scope.go:117] "RemoveContainer" containerID="88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.288354    3219 scope.go:117] "RemoveContainer" containerID="c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c"
	Dec 05 20:40:45 pause-405510 kubelet[3219]: I1205 20:40:45.804763    3219 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405510 -n pause-405510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-405510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-405510 -n pause-405510
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-405510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-405510 logs -n 25: (1.44030101s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo cat                            | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo                                | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo find                           | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-855101 sudo crio                           | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-855101                                     | cilium-855101             | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	| start   | -p force-systemd-env-903631                          | force-systemd-env-903631  | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:40 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-405510                                      | pause-405510              | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:41 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-699600 ssh cat                    | force-systemd-flag-699600 | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-699600                         | force-systemd-flag-699600 | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:39 UTC |
	| start   | -p cert-options-525564                               | cert-options-525564       | jenkins | v1.32.0 | 05 Dec 23 20:39 UTC | 05 Dec 23 20:41 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-601680                            | stopped-upgrade-601680    | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-903631                          | force-systemd-env-903631  | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC | 05 Dec 23 20:40 UTC |
	| start   | -p cert-expiration-873953                            | cert-expiration-873953    | jenkins | v1.32.0 | 05 Dec 23 20:40 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-525564 ssh                              | cert-options-525564       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-525564 -- sudo                       | cert-options-525564       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-525564                               | cert-options-525564       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p old-k8s-version-061206                            | old-k8s-version-061206    | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
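	The cilium-855101 rows above are the runtime diagnostics the test harness collects over SSH before deleting that profile. A minimal sketch of re-running the same CRI-O checks against any live profile (the <profile> placeholder is mine; the profiles listed in this table were deleted during the run):
	
	    out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl status crio --all --full --no-pager"
	    out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl cat crio --no-pager"
	    out/minikube-linux-amd64 -p <profile> ssh "sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;"
	    out/minikube-linux-amd64 -p <profile> ssh "sudo crio config"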
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:41:03
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:41:03.244728   43585 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:41:03.244903   43585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:41:03.244919   43585 out.go:309] Setting ErrFile to fd 2...
	I1205 20:41:03.244932   43585 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:41:03.245405   43585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:41:03.246385   43585 out.go:303] Setting JSON to false
	I1205 20:41:03.247552   43585 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5016,"bootTime":1701803847,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:41:03.247643   43585 start.go:138] virtualization: kvm guest
	I1205 20:41:03.252160   43585 out.go:177] * [old-k8s-version-061206] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:41:03.253721   43585 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:41:03.255209   43585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:41:03.253676   43585 notify.go:220] Checking for updates...
	I1205 20:41:03.256925   43585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:41:03.258525   43585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:41:03.260179   43585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:41:03.261688   43585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:41:03.263726   43585 config.go:182] Loaded profile config "cert-expiration-873953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:41:03.263952   43585 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:41:03.264079   43585 config.go:182] Loaded profile config "stopped-upgrade-601680": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:41:03.264178   43585 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:41:03.309129   43585 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:41:03.310639   43585 start.go:298] selected driver: kvm2
	I1205 20:41:03.310660   43585 start.go:902] validating driver "kvm2" against <nil>
	I1205 20:41:03.310675   43585 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:41:03.311719   43585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:41:03.311828   43585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:41:03.329046   43585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:41:03.329116   43585 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 20:41:03.329409   43585 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:41:03.329476   43585 cni.go:84] Creating CNI manager for ""
	I1205 20:41:03.329487   43585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:41:03.329499   43585 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:41:03.329509   43585 start_flags.go:323] config:
	{Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:41:03.329686   43585 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:41:03.331691   43585 out.go:177] * Starting control plane node old-k8s-version-061206 in cluster old-k8s-version-061206
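	The Last Start excerpt ends here: minikube validates the kvm2 driver, generates a fresh cluster config for old-k8s-version-061206 (Kubernetes v1.16.0, bridge CNI), and the report cuts the log off just as the control-plane node begins to start. A hedged sketch of capturing the full startup log for a profile that still exists (flags assumed from the standard minikube CLI, not taken from this run):
	
	    out/minikube-linux-amd64 -p old-k8s-version-061206 logs --file=old-k8s-version-061206.log
	    out/minikube-linux-amd64 -p old-k8s-version-061206 logs --problems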
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:38:26 UTC, ends at Tue 2023-12-05 20:41:04 UTC. --
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.153578834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=86ea0b94-7ea0-4e84-ab57-64dd9155271d name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.154699952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f5caa206-d42b-44fa-b682-37815b454d86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.155199492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808864155183422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f5caa206-d42b-44fa-b682-37815b454d86 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.157350738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d21d0206-43f9-4f4c-806e-323050570c9c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.157427280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d21d0206-43f9-4f4c-806e-323050570c9c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.157655472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d21d0206-43f9-4f4c-806e-323050570c9c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.203113762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4afbe7b7-1410-45ca-adf8-acaa29986cc7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.203176918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4afbe7b7-1410-45ca-adf8-acaa29986cc7 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.204245735Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f053e3f4-28f6-4026-aaf6-5903af36f8cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.204717303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808864204703580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f053e3f4-28f6-4026-aaf6-5903af36f8cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.205284342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a23296c-cb9b-4ab3-8b62-8848cfca2c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.205365649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a23296c-cb9b-4ab3-8b62-8848cfca2c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.205604045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a23296c-cb9b-4ab3-8b62-8848cfca2c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.265391256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f1bc1ef4-6a9a-407e-9c12-167a005db629 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.265486466Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f1bc1ef4-6a9a-407e-9c12-167a005db629 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.266931145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=941803c8-62fb-4cc0-b771-32583a9df8b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.267361053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701808864267347960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=941803c8-62fb-4cc0-b771-32583a9df8b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.268492444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=caa783a4-3aa1-405f-91af-06002a4f9ac2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.268631364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=caa783a4-3aa1-405f-91af-06002a4f9ac2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.269090785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=caa783a4-3aa1-405f-91af-06002a4f9ac2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.275963778Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=e1327d75-f861-461b-b959-7a99d7c02a3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.276329588Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&PodSandboxMetadata{Name:kube-proxy-kc59g,Uid:797c4268-91d2-4278-8c00-319257a312cf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1701808827306577220,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:39:16.960066044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-405510,Uid:5d53dbc59c4348e71f1d9c918130444f,Na
mespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1701808817064943222,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.159:8443,kubernetes.io/config.hash: 5d53dbc59c4348e71f1d9c918130444f,kubernetes.io/config.seen: 2023-12-05T20:39:03.231274550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-9pnrl,Uid:97a39919-30dd-4eac-ba0e-84bf38fc72eb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1701808817022412250,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:39:17.040576288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-405510,Uid:4631673392470210fe784e4bd61a2e42,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1701808816956964864,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4631673392470210fe784e4bd61a2e42,kubernetes.io/config.seen: 2023-12-05T20:39:03.231276555Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:677938363e431c9206fc916
bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-405510,Uid:f42049947cc72c161305b5b4c719579e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1701808816940440146,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f42049947cc72c161305b5b4c719579e,kubernetes.io/config.seen: 2023-12-05T20:39:03.231275704Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&PodSandboxMetadata{Name:etcd-pause-405510,Uid:7ffa15beeb3dc0a8e1de6f49752ccee3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1701808816934600532,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.159:2379,kubernetes.io/config.hash: 7ffa15beeb3dc0a8e1de6f49752ccee3,kubernetes.io/config.seen: 2023-12-05T20:39:03.231270172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&PodSandboxMetadata{Name:etcd-pause-405510,Uid:7ffa15beeb3dc0a8e1de6f49752ccee3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1701808812246781661,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-url
s: https://192.168.72.159:2379,kubernetes.io/config.hash: 7ffa15beeb3dc0a8e1de6f49752ccee3,kubernetes.io/config.seen: 2023-12-05T20:39:03.231270172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-405510,Uid:f42049947cc72c161305b5b4c719579e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1701808812241544340,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f42049947cc72c161305b5b4c719579e,kubernetes.io/config.seen: 2023-12-05T20:39:03.231275704Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35
c52ba4dd8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-405510,Uid:5d53dbc59c4348e71f1d9c918130444f,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1701808812106394095,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.159:8443,kubernetes.io/config.hash: 5d53dbc59c4348e71f1d9c918130444f,kubernetes.io/config.seen: 2023-12-05T20:39:03.231274550Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=e1327d75-f861-461b-b959-7a99d7c02a3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.277457459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f2842260-4db5-4019-a9f7-1231f9c25959 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.277526295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f2842260-4db5-4019-a9f7-1231f9c25959 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:04 pause-405510 crio[2465]: time="2023-12-05 20:41:04.277844501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701808842331926579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701808842356554331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4ead57bd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534,PodSandboxId:ce03864ac751acffa37d2f78005409f5887be23c0e03dac1c30e2df85281817f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701808835686923813,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:
map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143,PodSandboxId:c7f7de2536c28ac3c06ba3bb07c7b24d38c04ba37466224e65ef12df241a5c64,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701808835748322195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 8d1f459a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9,PodSandboxId:677938363e431c9206fc916bb15236fa4738fc1b6e320bba7c8284e688d26137,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701808835714490635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719579e,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701808835664557836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c,PodSandboxId:c308da359a3a83c5e88429b6e7cdd4cbe1b5fe09db5086e852a79ace8d3d9c2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1701808827678800671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kc59g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 797c4268-91d2-4278-8c00-319257a312cf,},Annotations:map[string]string{io.kubernetes.container.hash:
4ead57bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82,PodSandboxId:8c1a5bbbc2195af25ddfd389980d9b3df4e712d13f9a0152cfb69c236d70a0fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1701808818801304710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9pnrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97a39919-30dd-4eac-ba0e-84bf38fc72eb,},Annotations:map[string]string{io.kubernetes.container.hash: df9c665c,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223,PodSandboxId:fd4fe87ddac44ce66c6cf428e7dc69f22a292852a8e7daddf537f40f87eddbbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1701808818608602071,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4631673392470210fe784e4bd61a2e42,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d,PodSandboxId:2c8f69a414267bf04fabfd4bc919c54f0979aaeaf3522f34630d0086fc38eb84,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1701808814268615603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7ffa15beeb3dc0a8e1de6f49752ccee3,},Annotations:map[string]string{io.kubernetes.container.hash: 89d48631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01,PodSandboxId:4abdb8355a631f825b1e3b63009621db35c9a4c3c1f3e97b32add17e9527fe40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1701808813473789550,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42049947cc72c161305b5b4c719
579e,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6,PodSandboxId:a8768c7437a5af695080a77a49e9d207692c0cad19ed76dff8f8d35c52ba4dd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1701808813076473808,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-405510,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d53dbc59c4348e71f1d9c918130444f,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 8d1f459a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f2842260-4db5-4019-a9f7-1231f9c25959 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a38158f36b783       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   22 seconds ago      Running             kube-proxy                2                   c308da359a3a8       kube-proxy-kc59g
	cbac2bfcf7ea4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   22 seconds ago      Running             coredns                   2                   8c1a5bbbc2195       coredns-5dd5756b68-9pnrl
	cdfffea78b398       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   28 seconds ago      Running             kube-apiserver            2                   c7f7de2536c28       kube-apiserver-pause-405510
	0ae1030901ed6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   28 seconds ago      Running             kube-controller-manager   2                   677938363e431       kube-controller-manager-pause-405510
	b7813537c1c55       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   28 seconds ago      Running             etcd                      2                   ce03864ac751a       etcd-pause-405510
	d1dc791b37ec7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   28 seconds ago      Running             kube-scheduler            2                   fd4fe87ddac44       kube-scheduler-pause-405510
	c7807fa1e05c8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   36 seconds ago      Exited              kube-proxy                1                   c308da359a3a8       kube-proxy-kc59g
	88c9bd4032207       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   45 seconds ago      Exited              coredns                   1                   8c1a5bbbc2195       coredns-5dd5756b68-9pnrl
	a32e80a05fcba       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   45 seconds ago      Exited              kube-scheduler            1                   fd4fe87ddac44       kube-scheduler-pause-405510
	47d09874f4291       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   50 seconds ago      Exited              etcd                      1                   2c8f69a414267       etcd-pause-405510
	be2901c647efd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   50 seconds ago      Exited              kube-controller-manager   1                   4abdb8355a631       kube-controller-manager-pause-405510
	e0621a060b3f1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   51 seconds ago      Exited              kube-apiserver            1                   a8768c7437a5a       kube-apiserver-pause-405510
	
	* 
	* ==> coredns [88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50730 - 54808 "HINFO IN 6673357796809671742.2682621850958098627. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009636218s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [cbac2bfcf7ea4db3d3e89fbd72776b6e345f20c42472788381105f507c6d7223] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47220 - 40634 "HINFO IN 2169709015068101599.196534304026670715. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009494989s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-405510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-405510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=pause-405510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_39_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:38:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-405510
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 20:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:38:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 20:40:41 +0000   Tue, 05 Dec 2023 20:39:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    pause-405510
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c507931aeb84d0a868748ce96a3397d
	  System UUID:                1c507931-aeb8-4d0a-8687-48ce96a3397d
	  Boot ID:                    41174356-b7eb-4e2d-98cd-1226e0717822
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-9pnrl                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-pause-405510                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-pause-405510             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-405510    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-kc59g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-405510             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeAllocatableEnforced  2m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node pause-405510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node pause-405510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node pause-405510 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m1s               kubelet          Node pause-405510 status is now: NodeReady
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node pause-405510 event: Registered Node pause-405510 in Controller
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-405510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-405510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-405510 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-405510 event: Registered Node pause-405510 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068953] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.608540] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.759070] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143991] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.003362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.673797] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.129153] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.161032] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.124096] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.296048] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +11.737951] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[Dec 5 20:39] systemd-fstab-generator[1260]: Ignoring "noauto" for root device
	[ +54.536873] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 5 20:40] systemd-fstab-generator[2212]: Ignoring "noauto" for root device
	[  +0.307541] systemd-fstab-generator[2230]: Ignoring "noauto" for root device
	[  +0.419583] systemd-fstab-generator[2272]: Ignoring "noauto" for root device
	[  +0.390310] systemd-fstab-generator[2313]: Ignoring "noauto" for root device
	[  +0.497669] systemd-fstab-generator[2343]: Ignoring "noauto" for root device
	[ +20.498261] systemd-fstab-generator[3213]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [47d09874f429143f3b5cdf3db8ef2a273d6e0d5df18bb1e4a0abd2dced22047d] <==
	* 
	* 
	* ==> etcd [b7813537c1c554556cdfa66d1f89528fd1c430ac5f2e1621fae73d3ea6b58534] <==
	* {"level":"warn","ts":"2023-12-05T20:40:43.978481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.016093ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14017608028276443491 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-kc59g\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" value_size:4460 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T20:40:43.978607Z","caller":"traceutil/trace.go:171","msg":"trace[2014423653] linearizableReadLoop","detail":"{readStateIndex:473; appliedIndex:472; }","duration":"434.141671ms","start":"2023-12-05T20:40:43.544452Z","end":"2023-12-05T20:40:43.978593Z","steps":["trace[2014423653] 'read index received'  (duration: 172.236351ms)","trace[2014423653] 'applied index is now lower than readState.Index'  (duration: 261.903591ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:40:43.978682Z","caller":"traceutil/trace.go:171","msg":"trace[1132292377] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"442.291784ms","start":"2023-12-05T20:40:43.536381Z","end":"2023-12-05T20:40:43.978672Z","steps":["trace[1132292377] 'process raft request'  (duration: 180.360528ms)","trace[1132292377] 'compare'  (duration: 260.862561ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:40:43.978751Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:43.536355Z","time spent":"442.347782ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4511,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-kc59g\" mod_revision:407 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" value_size:4460 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-kc59g\" > >"}
	{"level":"warn","ts":"2023-12-05T20:40:43.978943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.511064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2023-12-05T20:40:43.981049Z","caller":"traceutil/trace.go:171","msg":"trace[1850116939] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:444; }","duration":"436.49328ms","start":"2023-12-05T20:40:43.544421Z","end":"2023-12-05T20:40:43.980915Z","steps":["trace[1850116939] 'agreement among raft nodes before linearized reading'  (duration: 434.477817ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:43.981115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:43.544404Z","time spent":"436.691268ms","remote":"127.0.0.1:46710","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":5450,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2023-12-05T20:40:43.98138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.085975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-12-05T20:40:43.981445Z","caller":"traceutil/trace.go:171","msg":"trace[2010443615] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:444; }","duration":"192.153064ms","start":"2023-12-05T20:40:43.789281Z","end":"2023-12-05T20:40:43.981434Z","steps":["trace[2010443615] 'agreement among raft nodes before linearized reading'  (duration: 192.050021ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.329793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.229561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/\" range_end:\"/registry/deployments/kube-system0\" ","response":"range_response_count:1 size:4133"}
	{"level":"info","ts":"2023-12-05T20:40:44.3299Z","caller":"traceutil/trace.go:171","msg":"trace[913754849] range","detail":"{range_begin:/registry/deployments/kube-system/; range_end:/registry/deployments/kube-system0; response_count:1; response_revision:444; }","duration":"223.357831ms","start":"2023-12-05T20:40:44.106522Z","end":"2023-12-05T20:40:44.32988Z","steps":["trace[913754849] 'range keys from in-memory index tree'  (duration: 223.134885ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.329802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.585361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" ","response":"range_response_count:1 size:201"}
	{"level":"info","ts":"2023-12-05T20:40:44.330157Z","caller":"traceutil/trace.go:171","msg":"trace[1796461597] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/cronjob-controller; range_end:; response_count:1; response_revision:444; }","duration":"245.933904ms","start":"2023-12-05T20:40:44.084209Z","end":"2023-12-05T20:40:44.330143Z","steps":["trace[1796461597] 'range keys from in-memory index tree'  (duration: 245.475991ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:44.613681Z","caller":"traceutil/trace.go:171","msg":"trace[1109098757] linearizableReadLoop","detail":"{readStateIndex:475; appliedIndex:474; }","duration":"202.597161ms","start":"2023-12-05T20:40:44.41107Z","end":"2023-12-05T20:40:44.613667Z","steps":["trace[1109098757] 'read index received'  (duration: 202.440922ms)","trace[1109098757] 'applied index is now lower than readState.Index'  (duration: 155.416µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:40:44.613773Z","caller":"traceutil/trace.go:171","msg":"trace[1364583562] transaction","detail":"{read_only:false; number_of_response:0; response_revision:444; }","duration":"204.519879ms","start":"2023-12-05T20:40:44.409249Z","end":"2023-12-05T20:40:44.613769Z","steps":["trace[1364583562] 'process raft request'  (duration: 204.305525ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:44.614088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.985227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"warn","ts":"2023-12-05T20:40:44.614163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.104565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2023-12-05T20:40:44.614223Z","caller":"traceutil/trace.go:171","msg":"trace[128975514] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:444; }","duration":"203.166368ms","start":"2023-12-05T20:40:44.411047Z","end":"2023-12-05T20:40:44.614214Z","steps":["trace[128975514] 'agreement among raft nodes before linearized reading'  (duration: 203.079767ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:44.61419Z","caller":"traceutil/trace.go:171","msg":"trace[1421691841] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:444; }","duration":"203.109941ms","start":"2023-12-05T20:40:44.411068Z","end":"2023-12-05T20:40:44.614178Z","steps":["trace[1421691841] 'agreement among raft nodes before linearized reading'  (duration: 202.85108ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:46.165823Z","caller":"traceutil/trace.go:171","msg":"trace[477641190] linearizableReadLoop","detail":"{readStateIndex:486; appliedIndex:485; }","duration":"310.311473ms","start":"2023-12-05T20:40:45.855492Z","end":"2023-12-05T20:40:46.165804Z","steps":["trace[477641190] 'read index received'  (duration: 310.102444ms)","trace[477641190] 'applied index is now lower than readState.Index'  (duration: 208.231µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:40:46.166188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.692766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" ","response":"range_response_count:1 size:4671"}
	{"level":"info","ts":"2023-12-05T20:40:46.166271Z","caller":"traceutil/trace.go:171","msg":"trace[1486337147] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-9pnrl; range_end:; response_count:1; response_revision:447; }","duration":"310.7922ms","start":"2023-12-05T20:40:45.855469Z","end":"2023-12-05T20:40:46.166261Z","steps":["trace[1486337147] 'agreement among raft nodes before linearized reading'  (duration: 310.490217ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T20:40:46.166402Z","caller":"traceutil/trace.go:171","msg":"trace[1571552869] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"346.469536ms","start":"2023-12-05T20:40:45.81992Z","end":"2023-12-05T20:40:46.166389Z","steps":["trace[1571552869] 'process raft request'  (duration: 345.74185ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:40:46.166493Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:45.855453Z","time spent":"310.968218ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4694,"request content":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" "}
	{"level":"warn","ts":"2023-12-05T20:40:46.166506Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:40:45.819904Z","time spent":"346.554368ms","remote":"127.0.0.1:46712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4656,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" mod_revision:443 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" value_size:4597 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-5dd5756b68-9pnrl\" > >"}
	
	* 
	* ==> kernel <==
	*  20:41:04 up 2 min,  0 users,  load average: 1.71, 0.76, 0.29
	Linux pause-405510 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cdfffea78b398a3b4442987147567f496fe5ef14ee7a778b9414c82a19cab143] <==
	* I1205 20:40:41.672352       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1205 20:40:41.757587       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 20:40:41.757739       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 20:40:41.855126       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 20:40:41.863615       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 20:40:41.874130       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1205 20:40:41.874219       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1205 20:40:41.874842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 20:40:41.883365       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1205 20:40:41.876797       1 shared_informer.go:318] Caches are synced for configmaps
	I1205 20:40:41.883588       1 aggregator.go:166] initial CRD sync complete...
	I1205 20:40:41.883621       1 autoregister_controller.go:141] Starting autoregister controller
	I1205 20:40:41.883643       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 20:40:41.883665       1 cache.go:39] Caches are synced for autoregister controller
	I1205 20:40:41.876810       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1205 20:40:41.918539       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 20:40:41.951646       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1205 20:40:42.676449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 20:40:44.624587       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1205 20:40:44.676581       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1205 20:40:44.733483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1205 20:40:44.786257       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 20:40:44.800310       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 20:40:54.931473       1 controller.go:624] quota admission added evaluator for: endpoints
	I1205 20:40:55.000723       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [e0621a060b3f1381742c40dcc31388e316d8a62dc85c3b205564528dc450d5a6] <==
	* 
	* 
	* ==> kube-controller-manager [0ae1030901ed6a0709c2368459d0167748f597a920d12cc5b1f4d191b26f0ff9] <==
	* I1205 20:40:54.944731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.113µs"
	I1205 20:40:54.945067       1 shared_informer.go:318] Caches are synced for taint
	I1205 20:40:54.945220       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1205 20:40:54.945354       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-405510"
	I1205 20:40:54.945449       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1205 20:40:54.945509       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1205 20:40:54.945553       1 taint_manager.go:210] "Sending events to api server"
	I1205 20:40:54.946120       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1205 20:40:54.946309       1 event.go:307] "Event occurred" object="pause-405510" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-405510 event: Registered Node pause-405510 in Controller"
	I1205 20:40:54.946500       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1205 20:40:54.947927       1 shared_informer.go:318] Caches are synced for crt configmap
	I1205 20:40:54.951195       1 shared_informer.go:318] Caches are synced for persistent volume
	I1205 20:40:54.951923       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1205 20:40:54.953780       1 shared_informer.go:318] Caches are synced for expand
	I1205 20:40:54.959345       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1205 20:40:54.968349       1 shared_informer.go:318] Caches are synced for PV protection
	I1205 20:40:54.974780       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1205 20:40:55.089279       1 shared_informer.go:318] Caches are synced for stateful set
	I1205 20:40:55.092700       1 shared_informer.go:318] Caches are synced for disruption
	I1205 20:40:55.099236       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1205 20:40:55.128749       1 shared_informer.go:318] Caches are synced for resource quota
	I1205 20:40:55.159094       1 shared_informer.go:318] Caches are synced for resource quota
	I1205 20:40:55.508890       1 shared_informer.go:318] Caches are synced for garbage collector
	I1205 20:40:55.508957       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1205 20:40:55.520307       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [be2901c647efd3ef47a9ce80b6d33a85aef44305e820348da5d60ae9b06d4e01] <==
	* 
	* 
	* ==> kube-proxy [a38158f36b7835d797c690cc29770bbfa0b6ceb6a75dc2bbecea58460039c928] <==
	* I1205 20:40:42.663360       1 server_others.go:69] "Using iptables proxy"
	I1205 20:40:42.684757       1 node.go:141] Successfully retrieved node IP: 192.168.72.159
	I1205 20:40:42.767145       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:40:42.767234       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:40:42.776762       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:40:42.776877       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:40:42.777237       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:40:42.777282       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:40:42.779842       1 config.go:188] "Starting service config controller"
	I1205 20:40:42.779894       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:40:42.779929       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:40:42.779933       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:40:42.780516       1 config.go:315] "Starting node config controller"
	I1205 20:40:42.780563       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:40:42.880064       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:40:42.880161       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:40:42.880625       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c] <==
	* I1205 20:40:27.834790       1 server_others.go:69] "Using iptables proxy"
	E1205 20:40:27.841154       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-405510": dial tcp 192.168.72.159:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [a32e80a05fcba16d36e020306c8517286304bb9cb4410704f27bb262f619e223] <==
	* E1205 20:40:29.050539       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.72.159:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.058358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.058424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.609099       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.72.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.609211       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.72.159:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.704768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.704875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.725540       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.725632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.889598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.72.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.889708       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.72.159:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:29.897508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.72.159:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:29.897578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.72.159:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.265416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.265540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.159:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.355754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.355832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.159:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:30.542106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:30.542184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.159:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	W1205 20:40:31.266663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:31.266778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.159:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	E1205 20:40:33.327959       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1205 20:40:33.328683       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1205 20:40:33.328803       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1205 20:40:33.329079       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d1dc791b37ec7196f06979bf2756f0a2423b9fa769097e4c1c21d0017159c38a] <==
	* I1205 20:40:38.286663       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:40:41.823104       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:40:41.823174       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:40:41.823237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:40:41.823251       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:40:41.884635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1205 20:40:41.884732       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:40:41.889625       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:40:41.889764       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:40:41.889781       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:40:41.889794       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:40:41.991299       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:38:26 UTC, ends at Tue 2023-12-05 20:41:05 UTC. --
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.074361    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.074419    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.324803    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-405510&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.324892    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-405510&limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.398773    3219 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-405510?timeout=10s\": dial tcp 192.168.72.159:8443: connect: connection refused" interval="1.6s"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.422353    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.422405    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: I1205 20:40:36.509091    3219 kubelet_node_status.go:70] "Attempting to register node" node="pause-405510"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.509449    3219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.159:8443: connect: connection refused" node="pause-405510"
	Dec 05 20:40:36 pause-405510 kubelet[3219]: W1205 20:40:36.540945    3219 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:36 pause-405510 kubelet[3219]: E1205 20:40:36.541071    3219 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.159:8443: connect: connection refused
	Dec 05 20:40:38 pause-405510 kubelet[3219]: I1205 20:40:38.111705    3219 kubelet_node_status.go:70] "Attempting to register node" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.898224    3219 kubelet_node_status.go:108] "Node was previously registered" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.898389    3219 kubelet_node_status.go:73] "Successfully registered node" node="pause-405510"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.900656    3219 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.901758    3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.976081    3219 apiserver.go:52] "Watching apiserver"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.985606    3219 topology_manager.go:215] "Topology Admit Handler" podUID="797c4268-91d2-4278-8c00-319257a312cf" podNamespace="kube-system" podName="kube-proxy-kc59g"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.985781    3219 topology_manager.go:215] "Topology Admit Handler" podUID="97a39919-30dd-4eac-ba0e-84bf38fc72eb" podNamespace="kube-system" podName="coredns-5dd5756b68-9pnrl"
	Dec 05 20:40:41 pause-405510 kubelet[3219]: I1205 20:40:41.993478    3219 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.029312    3219 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/797c4268-91d2-4278-8c00-319257a312cf-lib-modules\") pod \"kube-proxy-kc59g\" (UID: \"797c4268-91d2-4278-8c00-319257a312cf\") " pod="kube-system/kube-proxy-kc59g"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.029448    3219 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/797c4268-91d2-4278-8c00-319257a312cf-xtables-lock\") pod \"kube-proxy-kc59g\" (UID: \"797c4268-91d2-4278-8c00-319257a312cf\") " pod="kube-system/kube-proxy-kc59g"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.286362    3219 scope.go:117] "RemoveContainer" containerID="88c9bd403220778ff543c42af0edaad94618ffb391c82a4fa68022aad7557b82"
	Dec 05 20:40:42 pause-405510 kubelet[3219]: I1205 20:40:42.288354    3219 scope.go:117] "RemoveContainer" containerID="c7807fa1e05c8de13b33e8487101737042c78323a9eb99524e5363530ce8737c"
	Dec 05 20:40:45 pause-405510 kubelet[3219]: I1205 20:40:45.804763    3219 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405510 -n pause-405510
helpers_test.go:261: (dbg) Run:  kubectl --context pause-405510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (105.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (140.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-331495 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-331495 --alsologtostderr -v=3: exit status 82 (2m1.735061767s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-331495"  ...
	* Stopping node "embed-certs-331495"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:43:52.881631   45192 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:43:52.881769   45192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:52.881779   45192 out.go:309] Setting ErrFile to fd 2...
	I1205 20:43:52.881795   45192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:52.882066   45192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:43:52.882510   45192 out.go:303] Setting JSON to false
	I1205 20:43:52.882608   45192 mustload.go:65] Loading cluster: embed-certs-331495
	I1205 20:43:52.883069   45192 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:43:52.883146   45192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:43:52.883297   45192 mustload.go:65] Loading cluster: embed-certs-331495
	I1205 20:43:52.883424   45192 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:43:52.883463   45192 stop.go:39] StopHost: embed-certs-331495
	I1205 20:43:52.883934   45192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:43:52.883986   45192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:43:52.900219   45192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I1205 20:43:52.900709   45192 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:43:52.901304   45192 main.go:141] libmachine: Using API Version  1
	I1205 20:43:52.901329   45192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:43:52.901803   45192 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:43:52.904073   45192 out.go:177] * Stopping node "embed-certs-331495"  ...
	I1205 20:43:52.906127   45192 main.go:141] libmachine: Stopping "embed-certs-331495"...
	I1205 20:43:52.906147   45192 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:43:52.908265   45192 main.go:141] libmachine: (embed-certs-331495) Calling .Stop
	I1205 20:43:52.912115   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 0/60
	I1205 20:43:53.913407   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 1/60
	I1205 20:43:54.915765   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 2/60
	I1205 20:43:55.917129   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 3/60
	I1205 20:43:56.918767   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 4/60
	I1205 20:43:57.920751   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 5/60
	I1205 20:43:58.922129   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 6/60
	I1205 20:43:59.923579   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 7/60
	I1205 20:44:00.924972   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 8/60
	I1205 20:44:01.926361   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 9/60
	I1205 20:44:02.927741   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 10/60
	I1205 20:44:03.930045   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 11/60
	I1205 20:44:04.931396   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 12/60
	I1205 20:44:05.932778   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 13/60
	I1205 20:44:06.934216   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 14/60
	I1205 20:44:07.935602   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 15/60
	I1205 20:44:08.937083   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 16/60
	I1205 20:44:09.938792   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 17/60
	I1205 20:44:10.940650   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 18/60
	I1205 20:44:11.942045   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 19/60
	I1205 20:44:12.944166   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 20/60
	I1205 20:44:13.945735   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 21/60
	I1205 20:44:14.947130   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 22/60
	I1205 20:44:15.948823   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 23/60
	I1205 20:44:16.950723   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 24/60
	I1205 20:44:17.953212   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 25/60
	I1205 20:44:18.954487   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 26/60
	I1205 20:44:19.956802   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 27/60
	I1205 20:44:20.958305   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 28/60
	I1205 20:44:21.959516   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 29/60
	I1205 20:44:22.961165   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 30/60
	I1205 20:44:23.962568   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 31/60
	I1205 20:44:24.964832   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 32/60
	I1205 20:44:25.966123   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 33/60
	I1205 20:44:26.967564   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 34/60
	I1205 20:44:27.969173   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 35/60
	I1205 20:44:28.970428   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 36/60
	I1205 20:44:29.971691   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 37/60
	I1205 20:44:30.972995   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 38/60
	I1205 20:44:31.974205   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 39/60
	I1205 20:44:32.976256   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 40/60
	I1205 20:44:33.977492   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 41/60
	I1205 20:44:34.978787   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 42/60
	I1205 20:44:35.980698   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 43/60
	I1205 20:44:36.981967   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 44/60
	I1205 20:44:37.983776   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 45/60
	I1205 20:44:38.985102   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 46/60
	I1205 20:44:39.986351   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 47/60
	I1205 20:44:40.987705   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 48/60
	I1205 20:44:41.988976   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 49/60
	I1205 20:44:42.990548   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 50/60
	I1205 20:44:43.992733   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 51/60
	I1205 20:44:44.994023   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 52/60
	I1205 20:44:45.995914   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 53/60
	I1205 20:44:46.997314   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 54/60
	I1205 20:44:47.999365   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 55/60
	I1205 20:44:49.000868   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 56/60
	I1205 20:44:50.002252   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 57/60
	I1205 20:44:51.003708   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 58/60
	I1205 20:44:52.005307   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 59/60
	I1205 20:44:53.006663   45192 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:44:53.006734   45192 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:44:53.006751   45192 retry.go:31] will retry after 1.163866026s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:44:54.171010   45192 stop.go:39] StopHost: embed-certs-331495
	I1205 20:44:54.171366   45192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:44:54.171406   45192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:44:54.185706   45192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34525
	I1205 20:44:54.186112   45192 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:44:54.186593   45192 main.go:141] libmachine: Using API Version  1
	I1205 20:44:54.186619   45192 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:44:54.186933   45192 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:44:54.189195   45192 out.go:177] * Stopping node "embed-certs-331495"  ...
	I1205 20:44:54.190784   45192 main.go:141] libmachine: Stopping "embed-certs-331495"...
	I1205 20:44:54.190799   45192 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:44:54.192456   45192 main.go:141] libmachine: (embed-certs-331495) Calling .Stop
	I1205 20:44:54.195906   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 0/60
	I1205 20:44:55.197594   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 1/60
	I1205 20:44:56.199398   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 2/60
	I1205 20:44:57.201502   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 3/60
	I1205 20:44:58.203376   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 4/60
	I1205 20:44:59.205108   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 5/60
	I1205 20:45:00.206990   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 6/60
	I1205 20:45:01.208303   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 7/60
	I1205 20:45:02.210339   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 8/60
	I1205 20:45:03.211657   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 9/60
	I1205 20:45:04.213525   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 10/60
	I1205 20:45:05.215014   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 11/60
	I1205 20:45:06.216447   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 12/60
	I1205 20:45:07.217860   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 13/60
	I1205 20:45:08.219271   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 14/60
	I1205 20:45:09.221169   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 15/60
	I1205 20:45:10.222605   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 16/60
	I1205 20:45:11.223901   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 17/60
	I1205 20:45:12.225552   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 18/60
	I1205 20:45:13.226885   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 19/60
	I1205 20:45:14.228679   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 20/60
	I1205 20:45:15.229978   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 21/60
	I1205 20:45:16.231505   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 22/60
	I1205 20:45:17.232798   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 23/60
	I1205 20:45:18.234244   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 24/60
	I1205 20:45:19.236268   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 25/60
	I1205 20:45:20.237867   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 26/60
	I1205 20:45:21.239498   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 27/60
	I1205 20:45:22.241123   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 28/60
	I1205 20:45:23.242516   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 29/60
	I1205 20:45:24.244410   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 30/60
	I1205 20:45:25.245867   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 31/60
	I1205 20:45:26.247292   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 32/60
	I1205 20:45:27.248875   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 33/60
	I1205 20:45:28.506478   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 34/60
	I1205 20:45:29.508286   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 35/60
	I1205 20:45:30.511158   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 36/60
	I1205 20:45:31.512867   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 37/60
	I1205 20:45:32.514365   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 38/60
	I1205 20:45:33.516005   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 39/60
	I1205 20:45:34.517346   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 40/60
	I1205 20:45:35.519078   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 41/60
	I1205 20:45:36.520481   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 42/60
	I1205 20:45:37.521903   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 43/60
	I1205 20:45:38.523686   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 44/60
	I1205 20:45:39.525181   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 45/60
	I1205 20:45:40.527206   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 46/60
	I1205 20:45:41.528820   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 47/60
	I1205 20:45:42.530042   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 48/60
	I1205 20:45:43.531397   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 49/60
	I1205 20:45:44.533117   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 50/60
	I1205 20:45:45.534545   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 51/60
	I1205 20:45:46.536298   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 52/60
	I1205 20:45:47.538254   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 53/60
	I1205 20:45:48.539525   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 54/60
	I1205 20:45:49.541517   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 55/60
	I1205 20:45:50.542432   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 56/60
	I1205 20:45:51.544478   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 57/60
	I1205 20:45:52.545452   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 58/60
	I1205 20:45:53.546419   45192 main.go:141] libmachine: (embed-certs-331495) Waiting for machine to stop 59/60
	I1205 20:45:54.547194   45192 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:45:54.547238   45192 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:45:54.549673   45192 out.go:177] 
	W1205 20:45:54.551285   45192 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:45:54.551299   45192 out.go:239] * 
	* 
	W1205 20:45:54.554140   45192 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:45:54.555501   45192 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-331495 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495: exit status 3 (18.518974423s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:13.074575   46167 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1205 20:46:13.074593   46167 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-331495" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (139.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-061206 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-061206 --alsologtostderr -v=3: exit status 82 (2m1.001079652s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-061206"  ...
	* Stopping node "old-k8s-version-061206"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:44:16.657291   45359 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:44:16.657445   45359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:44:16.657456   45359 out.go:309] Setting ErrFile to fd 2...
	I1205 20:44:16.657461   45359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:44:16.657637   45359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:44:16.657857   45359 out.go:303] Setting JSON to false
	I1205 20:44:16.657944   45359 mustload.go:65] Loading cluster: old-k8s-version-061206
	I1205 20:44:16.658368   45359 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:44:16.658468   45359 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:44:16.658703   45359 mustload.go:65] Loading cluster: old-k8s-version-061206
	I1205 20:44:16.658878   45359 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:44:16.658933   45359 stop.go:39] StopHost: old-k8s-version-061206
	I1205 20:44:16.659550   45359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:44:16.659620   45359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:44:16.673682   45359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I1205 20:44:16.674114   45359 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:44:16.674703   45359 main.go:141] libmachine: Using API Version  1
	I1205 20:44:16.674726   45359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:44:16.675101   45359 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:44:16.677789   45359 out.go:177] * Stopping node "old-k8s-version-061206"  ...
	I1205 20:44:16.679443   45359 main.go:141] libmachine: Stopping "old-k8s-version-061206"...
	I1205 20:44:16.679458   45359 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:44:16.681150   45359 main.go:141] libmachine: (old-k8s-version-061206) Calling .Stop
	I1205 20:44:16.684903   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 0/60
	I1205 20:44:17.686704   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 1/60
	I1205 20:44:18.688628   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 2/60
	I1205 20:44:19.690332   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 3/60
	I1205 20:44:20.691993   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 4/60
	I1205 20:44:21.694222   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 5/60
	I1205 20:44:22.695867   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 6/60
	I1205 20:44:23.697891   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 7/60
	I1205 20:44:24.699650   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 8/60
	I1205 20:44:25.701043   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 9/60
	I1205 20:44:26.703639   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 10/60
	I1205 20:44:27.705176   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 11/60
	I1205 20:44:28.706727   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 12/60
	I1205 20:44:29.708165   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 13/60
	I1205 20:44:30.709594   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 14/60
	I1205 20:44:31.711646   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 15/60
	I1205 20:44:32.713196   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 16/60
	I1205 20:44:33.714428   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 17/60
	I1205 20:44:34.715913   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 18/60
	I1205 20:44:35.717347   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 19/60
	I1205 20:44:36.719445   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 20/60
	I1205 20:44:37.720856   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 21/60
	I1205 20:44:38.722473   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 22/60
	I1205 20:44:39.723922   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 23/60
	I1205 20:44:40.725305   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 24/60
	I1205 20:44:41.727188   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 25/60
	I1205 20:44:42.728684   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 26/60
	I1205 20:44:43.729904   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 27/60
	I1205 20:44:44.731380   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 28/60
	I1205 20:44:45.733884   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 29/60
	I1205 20:44:46.736012   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 30/60
	I1205 20:44:47.737382   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 31/60
	I1205 20:44:48.739288   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 32/60
	I1205 20:44:49.740646   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 33/60
	I1205 20:44:50.742122   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 34/60
	I1205 20:44:51.744015   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 35/60
	I1205 20:44:52.745462   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 36/60
	I1205 20:44:53.747518   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 37/60
	I1205 20:44:54.748932   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 38/60
	I1205 20:44:55.750474   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 39/60
	I1205 20:44:56.752794   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 40/60
	I1205 20:44:57.753938   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 41/60
	I1205 20:44:58.755143   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 42/60
	I1205 20:44:59.756405   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 43/60
	I1205 20:45:00.757675   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 44/60
	I1205 20:45:01.759579   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 45/60
	I1205 20:45:02.761029   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 46/60
	I1205 20:45:03.762334   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 47/60
	I1205 20:45:04.763694   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 48/60
	I1205 20:45:05.764844   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 49/60
	I1205 20:45:06.766643   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 50/60
	I1205 20:45:07.768708   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 51/60
	I1205 20:45:08.770289   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 52/60
	I1205 20:45:09.771864   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 53/60
	I1205 20:45:10.773906   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 54/60
	I1205 20:45:11.775946   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 55/60
	I1205 20:45:12.777244   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 56/60
	I1205 20:45:13.778600   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 57/60
	I1205 20:45:14.780797   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 58/60
	I1205 20:45:15.782199   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 59/60
	I1205 20:45:16.783606   45359 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:45:16.783676   45359 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:45:16.783693   45359 retry.go:31] will retry after 542.237943ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:45:17.326327   45359 stop.go:39] StopHost: old-k8s-version-061206
	I1205 20:45:17.326669   45359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:45:17.326709   45359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:45:17.341335   45359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44021
	I1205 20:45:17.341724   45359 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:45:17.342132   45359 main.go:141] libmachine: Using API Version  1
	I1205 20:45:17.342154   45359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:45:17.342467   45359 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:45:17.344185   45359 out.go:177] * Stopping node "old-k8s-version-061206"  ...
	I1205 20:45:17.345445   45359 main.go:141] libmachine: Stopping "old-k8s-version-061206"...
	I1205 20:45:17.345469   45359 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:45:17.347054   45359 main.go:141] libmachine: (old-k8s-version-061206) Calling .Stop
	I1205 20:45:17.350442   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 0/60
	I1205 20:45:18.352859   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 1/60
	I1205 20:45:19.354629   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 2/60
	I1205 20:45:20.356618   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 3/60
	I1205 20:45:21.357767   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 4/60
	I1205 20:45:22.359200   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 5/60
	I1205 20:45:23.360325   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 6/60
	I1205 20:45:24.362005   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 7/60
	I1205 20:45:25.363752   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 8/60
	I1205 20:45:26.365067   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 9/60
	I1205 20:45:27.366777   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 10/60
	I1205 20:45:28.506841   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 11/60
	I1205 20:45:29.508501   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 12/60
	I1205 20:45:30.510880   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 13/60
	I1205 20:45:31.512476   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 14/60
	I1205 20:45:32.514145   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 15/60
	I1205 20:45:33.515575   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 16/60
	I1205 20:45:34.516977   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 17/60
	I1205 20:45:35.518324   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 18/60
	I1205 20:45:36.519651   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 19/60
	I1205 20:45:37.521405   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 20/60
	I1205 20:45:38.523123   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 21/60
	I1205 20:45:39.524759   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 22/60
	I1205 20:45:40.526384   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 23/60
	I1205 20:45:41.527726   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 24/60
	I1205 20:45:42.529410   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 25/60
	I1205 20:45:43.531001   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 26/60
	I1205 20:45:44.532513   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 27/60
	I1205 20:45:45.534185   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 28/60
	I1205 20:45:46.535476   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 29/60
	I1205 20:45:47.537310   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 30/60
	I1205 20:45:48.538502   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 31/60
	I1205 20:45:49.539976   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 32/60
	I1205 20:45:50.541263   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 33/60
	I1205 20:45:51.542552   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 34/60
	I1205 20:45:52.543795   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 35/60
	I1205 20:45:53.545000   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 36/60
	I1205 20:45:54.546385   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 37/60
	I1205 20:45:55.547883   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 38/60
	I1205 20:45:56.549338   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 39/60
	I1205 20:45:57.550928   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 40/60
	I1205 20:45:58.553017   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 41/60
	I1205 20:45:59.554422   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 42/60
	I1205 20:46:00.556892   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 43/60
	I1205 20:46:01.558693   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 44/60
	I1205 20:46:02.560096   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 45/60
	I1205 20:46:03.561500   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 46/60
	I1205 20:46:04.563020   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 47/60
	I1205 20:46:05.564951   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 48/60
	I1205 20:46:06.566235   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 49/60
	I1205 20:46:07.567704   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 50/60
	I1205 20:46:08.569180   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 51/60
	I1205 20:46:09.570797   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 52/60
	I1205 20:46:10.572326   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 53/60
	I1205 20:46:11.573877   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 54/60
	I1205 20:46:12.576056   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 55/60
	I1205 20:46:13.577528   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 56/60
	I1205 20:46:14.578886   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 57/60
	I1205 20:46:15.580925   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 58/60
	I1205 20:46:16.582240   45359 main.go:141] libmachine: (old-k8s-version-061206) Waiting for machine to stop 59/60
	I1205 20:46:17.583541   45359 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:46:17.583601   45359 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:46:17.585549   45359 out.go:177] 
	W1205 20:46:17.587049   45359 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:46:17.587070   45359 out.go:239] * 
	* 
	W1205 20:46:17.589912   45359 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:46:17.592038   45359 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-061206 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206: exit status 3 (18.519722436s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:36.114567   46302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E1205 20:46:36.114587   46302 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-061206" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.52s)
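The failure above follows directly from the "Waiting for machine to stop N/60" lines: the stop path issues a shutdown request and then polls the VM state roughly once per second for 60 iterations, giving up with `unable to stop vm, current state "Running"` when the domain never leaves Running. The Go sketch below only illustrates that polling shape; `requestStop` and `machineState` are hypothetical stand-ins, not minikube's or libmachine's actual API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the driver calls; here the VM never
// leaves "Running", so the timeout path in the log is exercised.
func requestStop() error   { return nil }
func machineState() string { return "Running" }

// waitForStop asks the driver to stop the VM, then polls its state once
// per second for up to maxPolls iterations. If the VM is still running
// after the last poll, it returns an error analogous to the log's
// `stop err: unable to stop vm, current state "Running"`.
func waitForStop(maxPolls int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxPolls; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
		if machineState() != "Running" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(60); err != nil {
		fmt.Println("stop err:", err)
	}
}
```

With 60 one-second polls per attempt and two attempts per `stop` invocation, this matches the roughly two-minute durations (2m0s-2m1s) reported for each failing Stop test.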

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (140.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-143651 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-143651 --alsologtostderr -v=3: exit status 82 (2m1.435416871s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-143651"  ...
	* Stopping node "no-preload-143651"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:44:33.253640   45519 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:44:33.253899   45519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:44:33.253908   45519 out.go:309] Setting ErrFile to fd 2...
	I1205 20:44:33.253913   45519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:44:33.254106   45519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:44:33.254350   45519 out.go:303] Setting JSON to false
	I1205 20:44:33.254425   45519 mustload.go:65] Loading cluster: no-preload-143651
	I1205 20:44:33.254752   45519 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:44:33.254813   45519 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:44:33.254978   45519 mustload.go:65] Loading cluster: no-preload-143651
	I1205 20:44:33.255076   45519 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:44:33.255109   45519 stop.go:39] StopHost: no-preload-143651
	I1205 20:44:33.255553   45519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:44:33.255602   45519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:44:33.270105   45519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I1205 20:44:33.270563   45519 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:44:33.271127   45519 main.go:141] libmachine: Using API Version  1
	I1205 20:44:33.271156   45519 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:44:33.271522   45519 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:44:33.274068   45519 out.go:177] * Stopping node "no-preload-143651"  ...
	I1205 20:44:33.275542   45519 main.go:141] libmachine: Stopping "no-preload-143651"...
	I1205 20:44:33.275563   45519 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:44:33.277106   45519 main.go:141] libmachine: (no-preload-143651) Calling .Stop
	I1205 20:44:33.280503   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 0/60
	I1205 20:44:34.281845   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 1/60
	I1205 20:44:35.283156   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 2/60
	I1205 20:44:36.284629   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 3/60
	I1205 20:44:37.286013   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 4/60
	I1205 20:44:38.287876   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 5/60
	I1205 20:44:39.289151   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 6/60
	I1205 20:44:40.290574   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 7/60
	I1205 20:44:41.291998   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 8/60
	I1205 20:44:42.293372   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 9/60
	I1205 20:44:43.295591   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 10/60
	I1205 20:44:44.296877   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 11/60
	I1205 20:44:45.298424   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 12/60
	I1205 20:44:46.299833   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 13/60
	I1205 20:44:47.301063   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 14/60
	I1205 20:44:48.302934   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 15/60
	I1205 20:44:49.305050   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 16/60
	I1205 20:44:50.306452   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 17/60
	I1205 20:44:51.307779   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 18/60
	I1205 20:44:52.309775   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 19/60
	I1205 20:44:53.311837   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 20/60
	I1205 20:44:54.313264   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 21/60
	I1205 20:44:55.314654   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 22/60
	I1205 20:44:56.315923   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 23/60
	I1205 20:44:57.317311   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 24/60
	I1205 20:44:58.319641   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 25/60
	I1205 20:44:59.321086   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 26/60
	I1205 20:45:00.322483   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 27/60
	I1205 20:45:01.324811   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 28/60
	I1205 20:45:02.326300   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 29/60
	I1205 20:45:03.328391   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 30/60
	I1205 20:45:04.329724   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 31/60
	I1205 20:45:05.331066   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 32/60
	I1205 20:45:06.332493   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 33/60
	I1205 20:45:07.333857   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 34/60
	I1205 20:45:08.336184   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 35/60
	I1205 20:45:09.337458   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 36/60
	I1205 20:45:10.338740   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 37/60
	I1205 20:45:11.339993   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 38/60
	I1205 20:45:12.341460   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 39/60
	I1205 20:45:13.343665   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 40/60
	I1205 20:45:14.345351   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 41/60
	I1205 20:45:15.346749   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 42/60
	I1205 20:45:16.348870   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 43/60
	I1205 20:45:17.350264   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 44/60
	I1205 20:45:18.352261   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 45/60
	I1205 20:45:19.354016   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 46/60
	I1205 20:45:20.355289   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 47/60
	I1205 20:45:21.356680   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 48/60
	I1205 20:45:22.358117   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 49/60
	I1205 20:45:23.359681   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 50/60
	I1205 20:45:24.361402   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 51/60
	I1205 20:45:25.363015   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 52/60
	I1205 20:45:26.364650   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 53/60
	I1205 20:45:27.366407   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 54/60
	I1205 20:45:28.506498   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 55/60
	I1205 20:45:29.508022   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 56/60
	I1205 20:45:30.510138   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 57/60
	I1205 20:45:31.511784   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 58/60
	I1205 20:45:32.513145   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 59/60
	I1205 20:45:33.513708   45519 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:45:33.513767   45519 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:45:33.513788   45519 retry.go:31] will retry after 992.999193ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:45:34.507431   45519 stop.go:39] StopHost: no-preload-143651
	I1205 20:45:34.507800   45519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:45:34.507856   45519 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:45:34.522513   45519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
	I1205 20:45:34.522932   45519 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:45:34.523405   45519 main.go:141] libmachine: Using API Version  1
	I1205 20:45:34.523430   45519 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:45:34.523707   45519 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:45:34.525885   45519 out.go:177] * Stopping node "no-preload-143651"  ...
	I1205 20:45:34.527487   45519 main.go:141] libmachine: Stopping "no-preload-143651"...
	I1205 20:45:34.527501   45519 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:45:34.529134   45519 main.go:141] libmachine: (no-preload-143651) Calling .Stop
	I1205 20:45:34.532315   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 0/60
	I1205 20:45:35.533510   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 1/60
	I1205 20:45:36.534918   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 2/60
	I1205 20:45:37.536246   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 3/60
	I1205 20:45:38.537522   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 4/60
	I1205 20:45:39.539201   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 5/60
	I1205 20:45:40.540660   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 6/60
	I1205 20:45:41.541749   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 7/60
	I1205 20:45:42.542843   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 8/60
	I1205 20:45:43.544518   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 9/60
	I1205 20:45:44.546325   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 10/60
	I1205 20:45:45.547401   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 11/60
	I1205 20:45:46.548636   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 12/60
	I1205 20:45:47.550251   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 13/60
	I1205 20:45:48.551394   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 14/60
	I1205 20:45:49.553181   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 15/60
	I1205 20:45:50.554483   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 16/60
	I1205 20:45:51.556511   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 17/60
	I1205 20:45:52.557603   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 18/60
	I1205 20:45:53.558763   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 19/60
	I1205 20:45:54.560409   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 20/60
	I1205 20:45:55.561588   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 21/60
	I1205 20:45:56.562932   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 22/60
	I1205 20:45:57.564387   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 23/60
	I1205 20:45:58.565618   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 24/60
	I1205 20:45:59.567429   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 25/60
	I1205 20:46:00.568900   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 26/60
	I1205 20:46:01.570436   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 27/60
	I1205 20:46:02.573021   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 28/60
	I1205 20:46:03.574297   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 29/60
	I1205 20:46:04.575598   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 30/60
	I1205 20:46:05.576962   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 31/60
	I1205 20:46:06.577946   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 32/60
	I1205 20:46:07.579036   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 33/60
	I1205 20:46:08.580992   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 34/60
	I1205 20:46:09.582938   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 35/60
	I1205 20:46:10.584923   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 36/60
	I1205 20:46:11.586185   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 37/60
	I1205 20:46:12.587448   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 38/60
	I1205 20:46:13.589279   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 39/60
	I1205 20:46:14.590985   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 40/60
	I1205 20:46:15.592853   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 41/60
	I1205 20:46:16.594098   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 42/60
	I1205 20:46:17.595529   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 43/60
	I1205 20:46:18.596704   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 44/60
	I1205 20:46:19.598373   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 45/60
	I1205 20:46:20.599907   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 46/60
	I1205 20:46:21.601264   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 47/60
	I1205 20:46:22.602706   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 48/60
	I1205 20:46:23.604033   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 49/60
	I1205 20:46:24.605832   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 50/60
	I1205 20:46:25.607229   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 51/60
	I1205 20:46:26.608660   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 52/60
	I1205 20:46:27.610555   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 53/60
	I1205 20:46:28.611946   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 54/60
	I1205 20:46:29.613788   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 55/60
	I1205 20:46:30.615145   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 56/60
	I1205 20:46:31.616488   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 57/60
	I1205 20:46:32.617822   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 58/60
	I1205 20:46:33.619083   45519 main.go:141] libmachine: (no-preload-143651) Waiting for machine to stop 59/60
	I1205 20:46:34.619950   45519 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:46:34.619993   45519 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:46:34.622138   45519 out.go:177] 
	W1205 20:46:34.623773   45519 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:46:34.623794   45519 out.go:239] * 
	* 
	W1205 20:46:34.628228   45519 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:46:34.629687   45519 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-143651 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651: exit status 3 (18.633522635s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:53.266618   46486 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host
	E1205 20:46:53.266637   46486 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-143651" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.07s)
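This run also shows the retry layer: after the first 60-poll attempt fails, the command retries the whole stop once more after roughly a second ("will retry after 992.999193ms") and only then exits with GUEST_STOP_TIMEOUT (exit status 82). A rough sketch of such a fixed-attempt retry wrapper follows, assuming a single hypothetical `stopHost` helper rather than minikube's actual retry.go implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopHost is a hypothetical stand-in for one full stop attempt (the
// 60-poll wait shown in the log); here it always fails so the retry
// path is exercised.
func stopHost(name string) error {
	return errors.New(`Temporary Error: stop: unable to stop vm, current state "Running"`)
}

// stopWithRetry makes a fixed number of attempts, sleeping a short
// delay between them, mirroring the "will retry after 992.999193ms"
// line before the final GUEST_STOP_TIMEOUT exit.
func stopWithRetry(name string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = stopHost(name); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("GUEST_STOP_TIMEOUT: Unable to stop VM: %w", err)
}

func main() {
	if err := stopWithRetry("no-preload-143651", 2, 993*time.Millisecond); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
```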

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495: exit status 3 (3.201910871s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:16.278620   46244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1205 20:46:16.278648   46244 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-331495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-331495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.149765933s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-331495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495: exit status 3 (3.062497626s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:25.490597   46344 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host
	E1205 20:46:25.490619   46344 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.180:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-331495" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.41s)
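All of the EnableAddonAfterStop failures reduce to one assertion: after `stop`, the harness expects `status --format={{.Host}}` to print "Stopped", but because the preceding stop timed out the guest is unreachable and status reports "Error" (exit status 3). The simplified test below mirrors that check, assuming the minikube binary sits at the path shown in the log; it is an illustration, not the literal code in start_stop_delete_test.go.

```go
package startstop

import (
	"os/exec"
	"strings"
	"testing"
)

// hostStatus shells out to the minikube binary the same way the
// harness does and returns the trimmed {{.Host}} value. The exec
// error is ignored here because status legitimately exits non-zero
// (exit status 3) when the host is in an Error state.
func hostStatus(t *testing.T, profile string) string {
	t.Helper()
	out, _ := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	return strings.TrimSpace(string(out))
}

// TestPostStopStatus mirrors the assertion behind the
// `expected post-stop host status to be -"Stopped"- but got *"Error"*` line.
func TestPostStopStatus(t *testing.T) {
	if got := hostStatus(t, "embed-certs-331495"); got != "Stopped" {
		t.Errorf(`expected post-stop host status to be "Stopped" but got %q`, got)
	}
}
```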

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206: exit status 3 (3.168317128s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:39.282629   46534 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E1205 20:46:39.282649   46534 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-061206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-061206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154669232s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-061206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206: exit status 3 (3.061102688s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:48.498657   46659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host
	E1205 20:46:48.498676   46659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.116:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-061206" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-463614 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-463614 --alsologtostderr -v=3: exit status 82 (2m0.924669558s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-463614"  ...
	* Stopping node "default-k8s-diff-port-463614"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:46:44.281049   46642 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:46:44.281213   46642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:46:44.281224   46642 out.go:309] Setting ErrFile to fd 2...
	I1205 20:46:44.281231   46642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:46:44.281449   46642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:46:44.281690   46642 out.go:303] Setting JSON to false
	I1205 20:46:44.281789   46642 mustload.go:65] Loading cluster: default-k8s-diff-port-463614
	I1205 20:46:44.282144   46642 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:46:44.282231   46642 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:46:44.282479   46642 mustload.go:65] Loading cluster: default-k8s-diff-port-463614
	I1205 20:46:44.282617   46642 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:46:44.282655   46642 stop.go:39] StopHost: default-k8s-diff-port-463614
	I1205 20:46:44.283044   46642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:46:44.283100   46642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:46:44.297680   46642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I1205 20:46:44.298105   46642 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:46:44.298680   46642 main.go:141] libmachine: Using API Version  1
	I1205 20:46:44.298703   46642 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:46:44.299003   46642 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:46:44.301350   46642 out.go:177] * Stopping node "default-k8s-diff-port-463614"  ...
	I1205 20:46:44.302639   46642 main.go:141] libmachine: Stopping "default-k8s-diff-port-463614"...
	I1205 20:46:44.302654   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:46:44.304327   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Stop
	I1205 20:46:44.307540   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 0/60
	I1205 20:46:45.308871   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 1/60
	I1205 20:46:46.310426   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 2/60
	I1205 20:46:47.311694   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 3/60
	I1205 20:46:48.313164   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 4/60
	I1205 20:46:49.315342   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 5/60
	I1205 20:46:50.316825   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 6/60
	I1205 20:46:51.318310   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 7/60
	I1205 20:46:52.319696   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 8/60
	I1205 20:46:53.320887   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 9/60
	I1205 20:46:54.322954   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 10/60
	I1205 20:46:55.324518   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 11/60
	I1205 20:46:56.325965   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 12/60
	I1205 20:46:57.327344   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 13/60
	I1205 20:46:58.328674   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 14/60
	I1205 20:46:59.330780   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 15/60
	I1205 20:47:00.332248   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 16/60
	I1205 20:47:01.333649   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 17/60
	I1205 20:47:02.335162   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 18/60
	I1205 20:47:03.336561   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 19/60
	I1205 20:47:04.338944   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 20/60
	I1205 20:47:05.340330   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 21/60
	I1205 20:47:06.341671   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 22/60
	I1205 20:47:07.342976   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 23/60
	I1205 20:47:08.344394   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 24/60
	I1205 20:47:09.346432   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 25/60
	I1205 20:47:10.347944   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 26/60
	I1205 20:47:11.349312   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 27/60
	I1205 20:47:12.350729   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 28/60
	I1205 20:47:13.352074   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 29/60
	I1205 20:47:14.354194   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 30/60
	I1205 20:47:15.355773   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 31/60
	I1205 20:47:16.357027   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 32/60
	I1205 20:47:17.358428   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 33/60
	I1205 20:47:18.360536   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 34/60
	I1205 20:47:19.362981   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 35/60
	I1205 20:47:20.365371   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 36/60
	I1205 20:47:21.366975   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 37/60
	I1205 20:47:22.368471   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 38/60
	I1205 20:47:23.369758   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 39/60
	I1205 20:47:24.371894   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 40/60
	I1205 20:47:25.373601   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 41/60
	I1205 20:47:26.375193   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 42/60
	I1205 20:47:27.376515   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 43/60
	I1205 20:47:28.377711   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 44/60
	I1205 20:47:29.379724   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 45/60
	I1205 20:47:30.381158   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 46/60
	I1205 20:47:31.382665   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 47/60
	I1205 20:47:32.383999   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 48/60
	I1205 20:47:33.385379   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 49/60
	I1205 20:47:34.387490   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 50/60
	I1205 20:47:35.389071   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 51/60
	I1205 20:47:36.390807   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 52/60
	I1205 20:47:37.392347   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 53/60
	I1205 20:47:38.393717   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 54/60
	I1205 20:47:39.395745   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 55/60
	I1205 20:47:40.397451   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 56/60
	I1205 20:47:41.398992   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 57/60
	I1205 20:47:42.400546   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 58/60
	I1205 20:47:43.401832   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 59/60
	I1205 20:47:44.403244   46642 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:47:44.403308   46642 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:47:44.403343   46642 retry.go:31] will retry after 617.009766ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:47:45.021199   46642 stop.go:39] StopHost: default-k8s-diff-port-463614
	I1205 20:47:45.021589   46642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:47:45.021629   46642 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:47:45.035729   46642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I1205 20:47:45.036172   46642 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:47:45.036615   46642 main.go:141] libmachine: Using API Version  1
	I1205 20:47:45.036639   46642 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:47:45.036981   46642 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:47:45.039450   46642 out.go:177] * Stopping node "default-k8s-diff-port-463614"  ...
	I1205 20:47:45.041218   46642 main.go:141] libmachine: Stopping "default-k8s-diff-port-463614"...
	I1205 20:47:45.041234   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:47:45.043053   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Stop
	I1205 20:47:45.046316   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 0/60
	I1205 20:47:46.048103   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 1/60
	I1205 20:47:47.049582   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 2/60
	I1205 20:47:48.051197   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 3/60
	I1205 20:47:49.052805   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 4/60
	I1205 20:47:50.054865   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 5/60
	I1205 20:47:51.056616   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 6/60
	I1205 20:47:52.058433   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 7/60
	I1205 20:47:53.059933   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 8/60
	I1205 20:47:54.061731   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 9/60
	I1205 20:47:55.064145   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 10/60
	I1205 20:47:56.065513   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 11/60
	I1205 20:47:57.067009   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 12/60
	I1205 20:47:58.068482   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 13/60
	I1205 20:47:59.069949   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 14/60
	I1205 20:48:00.072512   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 15/60
	I1205 20:48:01.074205   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 16/60
	I1205 20:48:02.075709   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 17/60
	I1205 20:48:03.077282   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 18/60
	I1205 20:48:04.078948   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 19/60
	I1205 20:48:05.080999   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 20/60
	I1205 20:48:06.082526   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 21/60
	I1205 20:48:07.084048   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 22/60
	I1205 20:48:08.085343   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 23/60
	I1205 20:48:09.086824   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 24/60
	I1205 20:48:10.089008   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 25/60
	I1205 20:48:11.090740   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 26/60
	I1205 20:48:12.091983   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 27/60
	I1205 20:48:13.093516   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 28/60
	I1205 20:48:14.094907   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 29/60
	I1205 20:48:15.096521   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 30/60
	I1205 20:48:16.097877   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 31/60
	I1205 20:48:17.099408   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 32/60
	I1205 20:48:18.100711   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 33/60
	I1205 20:48:19.102127   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 34/60
	I1205 20:48:20.103913   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 35/60
	I1205 20:48:21.105402   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 36/60
	I1205 20:48:22.106949   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 37/60
	I1205 20:48:23.108405   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 38/60
	I1205 20:48:24.109926   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 39/60
	I1205 20:48:25.111941   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 40/60
	I1205 20:48:26.113415   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 41/60
	I1205 20:48:27.114772   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 42/60
	I1205 20:48:28.116261   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 43/60
	I1205 20:48:29.117654   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 44/60
	I1205 20:48:30.119190   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 45/60
	I1205 20:48:31.120795   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 46/60
	I1205 20:48:32.122468   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 47/60
	I1205 20:48:33.124295   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 48/60
	I1205 20:48:34.125642   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 49/60
	I1205 20:48:35.127706   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 50/60
	I1205 20:48:36.129048   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 51/60
	I1205 20:48:37.130646   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 52/60
	I1205 20:48:38.132124   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 53/60
	I1205 20:48:39.133506   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 54/60
	I1205 20:48:40.135420   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 55/60
	I1205 20:48:41.136850   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 56/60
	I1205 20:48:42.138415   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 57/60
	I1205 20:48:43.139813   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 58/60
	I1205 20:48:44.141375   46642 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for machine to stop 59/60
	I1205 20:48:45.142830   46642 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 20:48:45.142876   46642 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:48:45.144496   46642 out.go:177] 
	W1205 20:48:45.145809   46642 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:48:45.145824   46642 out.go:239] * 
	* 
	W1205 20:48:45.148952   46642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:48:45.150506   46642 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-463614 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614: exit status 3 (18.674494734s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:49:03.826544   47185 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E1205 20:49:03.826565   47185 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-463614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.60s)
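
The stop failure above is a timeout: the driver keeps reporting the VM as "Running" while minikube polls roughly once per second for about 60 attempts ("Waiting for machine to stop 56/60" and so on), then gives up with GUEST_STOP_TIMEOUT and exit status 82. The Go sketch below only illustrates that poll-and-give-up pattern; the Machine interface, waitForStop helper, and stuckMachine stub are hypothetical stand-ins for this example, not minikube's or libmachine's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Machine is an illustrative stand-in for a driver's host handle;
// it is not libmachine's real interface.
type Machine interface {
	Stop() error
	State() (string, error)
}

// waitForStop issues a stop request and then polls the machine state
// once per second, mirroring the "Waiting for machine to stop N/60"
// lines in the log above. It gives up after maxPolls attempts.
func waitForStop(m Machine, maxPolls int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxPolls; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("checking state: %w", err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
		time.Sleep(time.Second)
	}
	// This is the condition that surfaces above as GUEST_STOP_TIMEOUT.
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckMachine never leaves the "Running" state, reproducing the timeout path.
type stuckMachine struct{}

func (stuckMachine) Stop() error            { return nil }
func (stuckMachine) State() (string, error) { return "Running", nil }

func main() {
	// The real wait is about 60 polls; 3 keeps the example quick.
	err := waitForStop(stuckMachine{}, 3)
	fmt.Println("stop err:", err)
}

Run against a machine that never stops, the sketch ends with the same terminal error string that appears in the stop log above.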

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651: exit status 3 (3.168039175s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:46:56.434724   46745 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host
	E1205 20:46:56.434745   46745 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-143651 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-143651 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154420852s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-143651 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651: exit status 3 (3.061004819s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:47:05.650629   46836 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host
	E1205 20:47:05.650649   46836 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-143651" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
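
The EnableAddonAfterStop check drives `minikube status --format={{.Host}}`, which renders the status through a Go text/template and here yields "Error" instead of the expected "Stopped". As a rough illustration of how such a template selects a single field, here is a minimal, self-contained sketch; the Status struct and its field values are assumptions for the example, not minikube's exact status type.

package main

import (
	"os"
	"text/template"
)

// Status loosely mirrors the fields a status command might expose;
// it is illustrative only.
type Status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// The test above asks only for the host state, the same idea as
	// passing --format={{.Host}} on the command line.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))

	// After the failed stop seen above this renders "Error", which is
	// exactly the value the test refuses to accept in place of "Stopped".
	st := Status{Name: "no-preload-143651", Host: "Error", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}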

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614: exit status 3 (3.167880462s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:49:06.994618   47266 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E1205 20:49:06.994651   47266 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-463614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-463614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154182908s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-463614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614: exit status 3 (3.061629335s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 20:49:16.210655   47335 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E1205 20:49:16.210678   47335 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-463614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
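
Every status error in this group reduces to the same symptom: the TCP dial to the node's SSH port (192.168.39.27:22 here, 192.168.61.162:22 for no-preload) fails with "connect: no route to host" before any SSH session can be created. A quick way to check that symptom from Go is a plain dial with a timeout, sketched below; probeSSH and the hard-coded address are illustrative, not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH performs a plain TCP dial to the node's SSH port, the step
// that fails repeatedly in the log above with "no route to host".
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On an unreachable VM the returned error wraps
		// "connect: no route to host", matching the status errors above.
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	defer conn.Close()
	return nil
}

func main() {
	if err := probeSSH("192.168.39.27:22"); err != nil {
		fmt.Println("status error:", err)
	} else {
		fmt.Println("ssh port reachable")
	}
}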

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:05:56.985903732 +0000 UTC m=+5469.798363369
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-463614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-463614 logs -n 25: (1.700768526s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-46361
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.787781   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.799426   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:51:53.809064   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:51:53.809101   46700 api_server.go:131] duration metric: took 8.476226007s to wait for apiserver health ...
	I1205 20:51:53.809112   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:53.809120   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:53.811188   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:51:53.496825   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.170377466s)
	I1205 20:51:53.496856   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1205 20:51:53.496877   46866 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:53.496925   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:55.657835   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.160865472s)
	I1205 20:51:55.657869   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1205 20:51:55.657898   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:55.657955   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:52.104758   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105274   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105301   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:52.105232   47900 retry.go:31] will retry after 2.172151449s: waiting for machine to come up
	I1205 20:51:54.279576   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280091   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:54.280054   47900 retry.go:31] will retry after 3.042324499s: waiting for machine to come up
	I1205 20:51:53.812841   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:51:53.835912   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:51:53.920892   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:51:53.943982   46700 system_pods.go:59] 7 kube-system pods found
	I1205 20:51:53.944026   46700 system_pods.go:61] "coredns-5644d7b6d9-kqhgk" [473e53e3-a0bd-4dcb-88c1-d61e9cc3e686] Running
	I1205 20:51:53.944034   46700 system_pods.go:61] "etcd-old-k8s-version-061206" [a2a6a459-41a3-49e3-b32e-a091317390ea] Running
	I1205 20:51:53.944041   46700 system_pods.go:61] "kube-apiserver-old-k8s-version-061206" [9cf24995-fccb-47e4-8d4a-870198b7c82f] Running
	I1205 20:51:53.944054   46700 system_pods.go:61] "kube-controller-manager-old-k8s-version-061206" [225a4a8b-2b6e-46f4-8bd9-9a375b05c23c] Pending
	I1205 20:51:53.944061   46700 system_pods.go:61] "kube-proxy-r5n6g" [5db8876d-ecff-40b3-a61d-aeaf7870166c] Running
	I1205 20:51:53.944068   46700 system_pods.go:61] "kube-scheduler-old-k8s-version-061206" [de56d925-45b3-4c36-b2c2-c90938793aa2] Running
	I1205 20:51:53.944075   46700 system_pods.go:61] "storage-provisioner" [d5d57d93-f94b-4a3e-8c65-25cd4d71b9d5] Running
	I1205 20:51:53.944083   46700 system_pods.go:74] duration metric: took 23.165628ms to wait for pod list to return data ...
	I1205 20:51:53.944093   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:51:53.956907   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:51:53.956949   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:51:53.956964   46700 node_conditions.go:105] duration metric: took 12.864098ms to run NodePressure ...
	I1205 20:51:53.956986   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:54.482145   46700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:51:54.492629   46700 retry.go:31] will retry after 326.419845ms: kubelet not initialised
	I1205 20:51:54.826701   46700 retry.go:31] will retry after 396.475289ms: kubelet not initialised
	I1205 20:51:55.228971   46700 retry.go:31] will retry after 752.153604ms: kubelet not initialised
	I1205 20:51:55.987713   46700 retry.go:31] will retry after 881.822561ms: kubelet not initialised
	I1205 20:51:56.877407   46700 retry.go:31] will retry after 824.757816ms: kubelet not initialised
	I1205 20:51:57.707927   46700 retry.go:31] will retry after 2.392241385s: kubelet not initialised
	I1205 20:51:58.643374   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.985387711s)
	I1205 20:51:58.643408   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1205 20:51:58.643434   46866 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:58.643500   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:59.407245   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:51:59.407282   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:59.407333   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:57.324016   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324534   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324565   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:57.324482   47900 retry.go:31] will retry after 3.449667479s: waiting for machine to come up
	I1205 20:52:00.776644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777141   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Found IP for machine: 192.168.39.27
	I1205 20:52:00.777175   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777186   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserving static IP address...
	I1205 20:52:00.777825   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserved static IP address: 192.168.39.27
	I1205 20:52:00.777878   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.777892   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for SSH to be available...
	I1205 20:52:00.777918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | skip adding static IP to network mk-default-k8s-diff-port-463614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"}
	I1205 20:52:00.777929   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Getting to WaitForSSH function...
	I1205 20:52:00.780317   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.780729   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH client type: external
	I1205 20:52:00.780909   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa (-rw-------)
	I1205 20:52:00.780940   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:00.780959   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | About to run SSH command:
	I1205 20:52:00.780980   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | exit 0
	I1205 20:52:00.922857   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:00.923204   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetConfigRaw
	I1205 20:52:00.923973   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:00.927405   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.927885   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.927918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.928217   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:52:00.928469   47365 machine.go:88] provisioning docker machine ...
	I1205 20:52:00.928497   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:00.928735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.928912   47365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-463614"
	I1205 20:52:00.928938   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.929092   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:00.931664   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932096   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.932130   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:00.932496   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932672   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932822   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:00.932990   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:00.933401   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:00.933420   47365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-463614 && echo "default-k8s-diff-port-463614" | sudo tee /etc/hostname
	I1205 20:52:01.078295   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-463614
	
	I1205 20:52:01.078332   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.081604   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082051   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.082079   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082240   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.082492   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.083034   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.083506   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.083535   47365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-463614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-463614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-463614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:01.215856   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:01.215884   47365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:01.215912   47365 buildroot.go:174] setting up certificates
	I1205 20:52:01.215927   47365 provision.go:83] configureAuth start
	I1205 20:52:01.215947   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:01.216246   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:01.219169   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219465   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.219503   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.221768   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222137   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.222171   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222410   47365 provision.go:138] copyHostCerts
	I1205 20:52:01.222493   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:01.222508   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:01.222568   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:01.222686   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:01.222717   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:01.222757   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:01.222825   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:01.222832   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:01.222856   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:01.222921   47365 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-463614 san=[192.168.39.27 192.168.39.27 localhost 127.0.0.1 minikube default-k8s-diff-port-463614]
	I1205 20:52:02.247282   46374 start.go:369] acquired machines lock for "embed-certs-331495" in 54.15977635s
	I1205 20:52:02.247348   46374 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:52:02.247360   46374 fix.go:54] fixHost starting: 
	I1205 20:52:02.247794   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:02.247830   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:02.265529   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I1205 20:52:02.265970   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:02.266457   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:52:02.266484   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:02.266825   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:02.267016   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:02.267185   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:52:02.268838   46374 fix.go:102] recreateIfNeeded on embed-certs-331495: state=Stopped err=<nil>
	I1205 20:52:02.268859   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	W1205 20:52:02.269010   46374 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:52:02.270658   46374 out.go:177] * Restarting existing kvm2 VM for "embed-certs-331495" ...
	I1205 20:52:00.114757   46700 retry.go:31] will retry after 2.136164682s: kubelet not initialised
	I1205 20:52:02.258242   46700 retry.go:31] will retry after 4.673214987s: kubelet not initialised
	I1205 20:52:01.474739   47365 provision.go:172] copyRemoteCerts
	I1205 20:52:01.474804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:01.474834   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.477249   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477632   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.477659   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477908   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.478119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.478313   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.478463   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:01.569617   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:01.594120   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 20:52:01.618066   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:52:01.643143   47365 provision.go:86] duration metric: configureAuth took 427.201784ms
	I1205 20:52:01.643169   47365 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:01.643353   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:01.643435   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.646320   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.646821   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.646881   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.647001   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.647206   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647407   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647555   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.647721   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.648105   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.648135   47365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:01.996428   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:01.996456   47365 machine.go:91] provisioned docker machine in 1.067968652s
	I1205 20:52:01.996468   47365 start.go:300] post-start starting for "default-k8s-diff-port-463614" (driver="kvm2")
	I1205 20:52:01.996482   47365 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:01.996502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:01.996804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:01.996829   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.999880   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000345   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.000378   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.000733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.000872   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.001041   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.088194   47365 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:02.092422   47365 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:02.092447   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:02.092522   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:02.092607   47365 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:02.092692   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:02.100847   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:02.125282   47365 start.go:303] post-start completed in 128.798422ms
	I1205 20:52:02.125308   47365 fix.go:56] fixHost completed within 20.234129302s
	I1205 20:52:02.125334   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.128159   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128506   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.128539   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.128970   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129157   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129330   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.129505   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:02.129980   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:02.130001   47365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:02.247134   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809522.185244520
	
	I1205 20:52:02.247160   47365 fix.go:206] guest clock: 1701809522.185244520
	I1205 20:52:02.247170   47365 fix.go:219] Guest: 2023-12-05 20:52:02.18524452 +0000 UTC Remote: 2023-12-05 20:52:02.125313647 +0000 UTC m=+165.907305797 (delta=59.930873ms)
	I1205 20:52:02.247193   47365 fix.go:190] guest clock delta is within tolerance: 59.930873ms
	I1205 20:52:02.247199   47365 start.go:83] releasing machines lock for "default-k8s-diff-port-463614", held for 20.356057608s
	I1205 20:52:02.247233   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.247561   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:02.250476   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.250918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.250952   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.251123   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.251833   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252026   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252117   47365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:02.252168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.252434   47365 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:02.252461   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.255221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255382   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.255750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.255949   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.256004   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.256060   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256278   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.256288   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256453   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256447   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.256586   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256698   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.343546   47365 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:02.368171   47365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:02.518472   47365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:02.524733   47365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:02.524808   47365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:02.541607   47365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:02.541632   47365 start.go:475] detecting cgroup driver to use...
	I1205 20:52:02.541703   47365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:02.560122   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:02.575179   47365 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:02.575244   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:02.591489   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:02.606022   47365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:02.711424   47365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:02.828436   47365 docker.go:219] disabling docker service ...
	I1205 20:52:02.828515   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:02.844209   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:02.860693   47365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:02.979799   47365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:03.111682   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:03.128706   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:03.147984   47365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:03.148057   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.160998   47365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:03.161068   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.173347   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.185126   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.195772   47365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:03.206308   47365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:03.215053   47365 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:03.215103   47365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:03.227755   47365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:03.237219   47365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:03.369712   47365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:03.561508   47365 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:03.561575   47365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:03.569369   47365 start.go:543] Will wait 60s for crictl version
	I1205 20:52:03.569437   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:52:03.575388   47365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:03.618355   47365 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:03.618458   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.670174   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.716011   47365 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:02.272006   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Start
	I1205 20:52:02.272171   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring networks are active...
	I1205 20:52:02.272890   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network default is active
	I1205 20:52:02.273264   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network mk-embed-certs-331495 is active
	I1205 20:52:02.273634   46374 main.go:141] libmachine: (embed-certs-331495) Getting domain xml...
	I1205 20:52:02.274223   46374 main.go:141] libmachine: (embed-certs-331495) Creating domain...
	I1205 20:52:03.644135   46374 main.go:141] libmachine: (embed-certs-331495) Waiting to get IP...
	I1205 20:52:03.645065   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.645451   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.645561   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.645439   48036 retry.go:31] will retry after 246.973389ms: waiting for machine to come up
	I1205 20:52:03.894137   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.894708   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.894813   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.894768   48036 retry.go:31] will retry after 353.753964ms: waiting for machine to come up
	I1205 20:52:04.250496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.251201   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.251231   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.251151   48036 retry.go:31] will retry after 370.705045ms: waiting for machine to come up
	I1205 20:52:04.623959   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.624532   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.624563   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.624488   48036 retry.go:31] will retry after 409.148704ms: waiting for machine to come up
	I1205 20:52:05.035991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.036492   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.036521   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.036458   48036 retry.go:31] will retry after 585.089935ms: waiting for machine to come up
	I1205 20:52:01.272757   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.865397348s)
	I1205 20:52:01.272791   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1205 20:52:01.272823   46866 cache_images.go:123] Successfully loaded all cached images
	I1205 20:52:01.272830   46866 cache_images.go:92] LoadImages completed in 17.860858219s
	I1205 20:52:01.272913   46866 ssh_runner.go:195] Run: crio config
	I1205 20:52:01.346651   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:01.346671   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:01.346689   46866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:01.346715   46866 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.162 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143651 NodeName:no-preload-143651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:01.346890   46866 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143651"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:01.347005   46866 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:01.347080   46866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1205 20:52:01.360759   46866 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:01.360818   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:01.372537   46866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 20:52:01.389057   46866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1205 20:52:01.405689   46866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1205 20:52:01.426066   46866 ssh_runner.go:195] Run: grep 192.168.61.162	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:01.430363   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:01.443015   46866 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651 for IP: 192.168.61.162
	I1205 20:52:01.443049   46866 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:01.443202   46866 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:01.443254   46866 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:01.443337   46866 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.key
	I1205 20:52:01.443423   46866 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key.5bf94fca
	I1205 20:52:01.443477   46866 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key
	I1205 20:52:01.443626   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:01.443664   46866 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:01.443689   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:01.443729   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:01.443768   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:01.443800   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:01.443868   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:01.444505   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:01.471368   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:01.495925   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:01.520040   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:01.542515   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:01.565061   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:01.592011   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:01.615244   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:01.640425   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:01.666161   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:01.688991   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:01.711978   46866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:01.728642   46866 ssh_runner.go:195] Run: openssl version
	I1205 20:52:01.734248   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:01.746741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751589   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751647   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.757299   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:01.768280   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:01.779234   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783897   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783961   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.789668   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:01.800797   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:01.814741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819713   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819774   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.825538   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:01.836443   46866 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:01.842191   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:01.850025   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:01.857120   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:01.863507   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:01.870887   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:01.878657   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:52:01.886121   46866 kubeadm.go:404] StartCluster: {Name:no-preload-143651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:01.886245   46866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:01.886311   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:01.933026   46866 cri.go:89] found id: ""
	I1205 20:52:01.933096   46866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:01.946862   46866 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:01.946891   46866 kubeadm.go:636] restartCluster start
	I1205 20:52:01.946950   46866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:01.959468   46866 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.960467   46866 kubeconfig.go:92] found "no-preload-143651" server: "https://192.168.61.162:8443"
	I1205 20:52:01.962804   46866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:01.975351   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.975427   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:01.988408   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.988439   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.988493   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.001669   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:02.502716   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:02.502781   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.515220   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.002777   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.002843   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.016667   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.501748   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.501840   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.515761   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.001797   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.001873   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.018140   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.502697   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.502791   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.518059   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.002414   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.002515   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.021107   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.502637   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.502733   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.521380   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.717595   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:03.720774   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721210   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:03.721242   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721414   47365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:03.726330   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:03.738414   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:03.738479   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:03.777318   47365 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:03.777380   47365 ssh_runner.go:195] Run: which lz4
	I1205 20:52:03.781463   47365 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:03.785728   47365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:03.785759   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:05.712791   47365 crio.go:444] Took 1.931355 seconds to copy over tarball
	I1205 20:52:05.712888   47365 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:06.939842   46700 retry.go:31] will retry after 8.345823287s: kubelet not initialised
	I1205 20:52:05.623348   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.623894   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.623928   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.623844   48036 retry.go:31] will retry after 819.796622ms: waiting for machine to come up
	I1205 20:52:06.445034   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:06.445471   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:06.445504   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:06.445427   48036 retry.go:31] will retry after 716.017152ms: waiting for machine to come up
	I1205 20:52:07.162965   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:07.163496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:07.163526   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:07.163445   48036 retry.go:31] will retry after 1.085415508s: waiting for machine to come up
	I1205 20:52:08.250373   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:08.250962   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:08.250999   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:08.250909   48036 retry.go:31] will retry after 1.128069986s: waiting for machine to come up
	I1205 20:52:09.380537   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:09.381001   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:09.381027   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:09.380964   48036 retry.go:31] will retry after 1.475239998s: waiting for machine to come up
	I1205 20:52:06.002168   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.002247   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.025123   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:06.502715   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.502831   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.519395   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.001937   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.002068   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.019028   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.501962   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.502059   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.515098   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.002769   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.002909   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.020137   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.501807   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.501949   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.518082   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.002421   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.002505   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.016089   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.502171   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.502261   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.515449   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.001975   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.002117   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.013831   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.502398   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.502481   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.514939   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.946250   47365 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.233316669s)
	I1205 20:52:08.946291   47365 crio.go:451] Took 3.233468 seconds to extract the tarball
	I1205 20:52:08.946304   47365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:08.988526   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:09.041782   47365 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:09.041812   47365 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:09.041908   47365 ssh_runner.go:195] Run: crio config
	I1205 20:52:09.105852   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:09.105879   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:09.105901   47365 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:09.105926   47365 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-463614 NodeName:default-k8s-diff-port-463614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:09.106114   47365 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-463614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:09.106218   47365 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-463614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1205 20:52:09.106295   47365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:09.116476   47365 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:09.116569   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:09.125304   47365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1205 20:52:09.141963   47365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:09.158882   47365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1205 20:52:09.177829   47365 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:09.181803   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:09.194791   47365 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614 for IP: 192.168.39.27
	I1205 20:52:09.194824   47365 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:09.194968   47365 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:09.195028   47365 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:09.195135   47365 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.key
	I1205 20:52:09.195225   47365 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key.310d49ea
	I1205 20:52:09.195287   47365 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key
	I1205 20:52:09.195457   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:09.195502   47365 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:09.195519   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:09.195561   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:09.195594   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:09.195625   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:09.195698   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:09.196495   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:09.221945   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:09.249557   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:09.279843   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:09.309602   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:09.338163   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:09.365034   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:09.394774   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:09.420786   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:09.445787   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:09.474838   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:09.499751   47365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:09.523805   47365 ssh_runner.go:195] Run: openssl version
	I1205 20:52:09.530143   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:09.545184   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550681   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550751   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.558670   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:09.573789   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:09.585134   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591055   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591136   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.597286   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:09.608901   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:09.620949   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626190   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626267   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.632394   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:09.645362   47365 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:09.650768   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:09.657084   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:09.663183   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:09.669093   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:09.675365   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:09.681992   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
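	The run of "openssl x509 ... -checkend 86400" calls above verifies that each existing control-plane certificate remains valid for at least another 86400 seconds (24 hours); exit status 0 means the cert can be reused on restart. A minimal sketch of the same check, assuming a local cert path (hypothetical helper, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor reports whether the certificate at path is still valid
	// `seconds` seconds from now, using the same openssl invocation seen in
	// the log (exit code 0 means it will not expire within the window).
	func certValidFor(path string, seconds int) bool {
		cmd := exec.Command("openssl", "x509", "-noout",
			"-in", path, "-checkend", fmt.Sprint(seconds))
		return cmd.Run() == nil
	}

	func main() {
		// Path taken from the log; any readable PEM/DER certificate works.
		if certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400) {
			fmt.Println("certificate is good for at least another 24h")
		} else {
			fmt.Println("certificate expires within 24h (or could not be checked)")
		}
	}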
	I1205 20:52:09.688849   47365 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:09.688963   47365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:09.689035   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:09.730999   47365 cri.go:89] found id: ""
	I1205 20:52:09.731061   47365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:09.741609   47365 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:09.741640   47365 kubeadm.go:636] restartCluster start
	I1205 20:52:09.741700   47365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:09.751658   47365 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.752671   47365 kubeconfig.go:92] found "default-k8s-diff-port-463614" server: "https://192.168.39.27:8444"
	I1205 20:52:09.755361   47365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:09.765922   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.766006   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.781956   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.781983   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.782033   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.795265   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.295986   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.296088   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.312309   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.795832   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.795959   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.808880   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.857552   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:10.857968   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:10.858002   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:10.857911   48036 retry.go:31] will retry after 1.882319488s: waiting for machine to come up
	I1205 20:52:12.741608   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:12.742051   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:12.742081   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:12.742006   48036 retry.go:31] will retry after 2.598691975s: waiting for machine to come up
	I1205 20:52:15.343818   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:15.344360   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:15.344385   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:15.344306   48036 retry.go:31] will retry after 3.313897625s: waiting for machine to come up
	I1205 20:52:11.002661   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.002740   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.014931   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.502548   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.502621   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.516090   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.975668   46866 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:11.975724   46866 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:11.975739   46866 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:11.975820   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:12.032265   46866 cri.go:89] found id: ""
	I1205 20:52:12.032364   46866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:12.050705   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:12.060629   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:12.060726   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.073988   46866 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.074015   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:12.209842   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.318235   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108353469s)
	I1205 20:52:13.318280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.518224   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.606064   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.695764   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:13.695849   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:13.718394   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.237554   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.737066   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:15.236911   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:11.295662   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.295754   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.308889   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.796322   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.796432   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.812351   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.295433   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.295527   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.308482   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.795889   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.795961   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.812458   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.296017   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.296114   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.312758   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.796111   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.796256   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.812247   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.295726   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.295808   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.308712   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.796358   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.796439   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.813173   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.295541   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.295632   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.312665   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.796231   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.796378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.816767   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.292395   46700 retry.go:31] will retry after 12.309806949s: kubelet not initialised
	I1205 20:52:18.659431   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:18.659915   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:18.659944   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:18.659867   48036 retry.go:31] will retry after 3.672641091s: waiting for machine to come up
	I1205 20:52:15.737064   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.237656   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.263010   46866 api_server.go:72] duration metric: took 2.567245952s to wait for apiserver process to appear ...
	I1205 20:52:16.263039   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:16.263057   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.286115   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.286153   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.286173   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.334683   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.334710   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.835110   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.840833   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:19.840866   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.335444   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.355923   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:20.355956   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.835568   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.840974   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:52:20.849239   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:52:20.849274   46866 api_server.go:131] duration metric: took 4.586226618s to wait for apiserver health ...
	I1205 20:52:20.849284   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:20.849323   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:20.850829   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
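	The healthz sequence above is the expected progression after a control-plane restart: 403 while the probe is still anonymous and RBAC bootstrap roles are not yet in place, 500 while individual poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, then 200 once the apiserver is fully up. A minimal sketch of polling such an endpoint, assuming a self-signed apiserver certificate and an anonymous probe (illustrative only, not minikube's api_server code; the URL is the one from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 OK
	// or the deadline passes. TLS verification is skipped because the probe,
	// like the one in the log, hits a self-signed apiserver cert anonymously.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not report healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.162:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}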
	I1205 20:52:16.295650   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.295729   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.312742   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:16.796283   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.796364   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.812822   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.295879   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.295953   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.312254   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.795437   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.795519   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.808598   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.296187   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.296266   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.312808   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.796368   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.796480   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.812986   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.295511   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:19.295576   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:19.308830   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.766569   47365 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:19.766653   47365 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:19.766673   47365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:19.766748   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:19.820510   47365 cri.go:89] found id: ""
	I1205 20:52:19.820590   47365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:19.842229   47365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:19.853234   47365 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:19.853293   47365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866181   47365 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866220   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:20.022098   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.165439   47365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143295704s)
	I1205 20:52:21.165472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:22.333575   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334146   46374 main.go:141] libmachine: (embed-certs-331495) Found IP for machine: 192.168.72.180
	I1205 20:52:22.334189   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has current primary IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334205   46374 main.go:141] libmachine: (embed-certs-331495) Reserving static IP address...
	I1205 20:52:22.334654   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.334686   46374 main.go:141] libmachine: (embed-certs-331495) DBG | skip adding static IP to network mk-embed-certs-331495 - found existing host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"}
	I1205 20:52:22.334699   46374 main.go:141] libmachine: (embed-certs-331495) Reserved static IP address: 192.168.72.180
	I1205 20:52:22.334717   46374 main.go:141] libmachine: (embed-certs-331495) Waiting for SSH to be available...
	I1205 20:52:22.334727   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Getting to WaitForSSH function...
	I1205 20:52:22.337411   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337832   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.337863   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337976   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH client type: external
	I1205 20:52:22.338005   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa (-rw-------)
	I1205 20:52:22.338038   46374 main.go:141] libmachine: (embed-certs-331495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:22.338057   46374 main.go:141] libmachine: (embed-certs-331495) DBG | About to run SSH command:
	I1205 20:52:22.338071   46374 main.go:141] libmachine: (embed-certs-331495) DBG | exit 0
	I1205 20:52:22.430984   46374 main.go:141] libmachine: (embed-certs-331495) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:22.431374   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetConfigRaw
	I1205 20:52:22.432120   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.435317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.435737   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.435772   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.436044   46374 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:52:22.436283   46374 machine.go:88] provisioning docker machine ...
	I1205 20:52:22.436304   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:22.436519   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436687   46374 buildroot.go:166] provisioning hostname "embed-certs-331495"
	I1205 20:52:22.436707   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436882   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.439595   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.439966   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.439998   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.440179   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.440392   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440558   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440718   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.440891   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.441216   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.441235   46374 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-331495 && echo "embed-certs-331495" | sudo tee /etc/hostname
	I1205 20:52:22.584600   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-331495
	
	I1205 20:52:22.584662   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.587640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588053   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.588083   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588255   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.588469   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.588985   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.589340   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.589369   46374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-331495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-331495/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-331495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:22.722352   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:22.722390   46374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:22.722437   46374 buildroot.go:174] setting up certificates
	I1205 20:52:22.722459   46374 provision.go:83] configureAuth start
	I1205 20:52:22.722475   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.722776   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.725826   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726254   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.726313   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726616   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.729267   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729606   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.729640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729798   46374 provision.go:138] copyHostCerts
	I1205 20:52:22.729843   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:22.729853   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:22.729907   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:22.729986   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:22.729994   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:22.730019   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:22.730090   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:22.730100   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:22.730128   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:22.730188   46374 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.embed-certs-331495 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-331495]
	I1205 20:52:22.795361   46374 provision.go:172] copyRemoteCerts
	I1205 20:52:22.795435   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:22.795464   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.798629   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799006   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.799052   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799222   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.799448   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.799617   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.799774   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:22.892255   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:52:22.929940   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:52:22.966087   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:22.998887   46374 provision.go:86] duration metric: configureAuth took 276.409362ms
	I1205 20:52:22.998937   46374 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:22.999160   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:22.999253   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.002604   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.002992   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.003033   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.003265   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.003516   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003723   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003916   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.004090   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.004540   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.004568   46374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:23.371418   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:23.371450   46374 machine.go:91] provisioned docker machine in 935.149228ms
	I1205 20:52:23.371464   46374 start.go:300] post-start starting for "embed-certs-331495" (driver="kvm2")
	I1205 20:52:23.371477   46374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:23.371500   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.371872   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:23.371911   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.375440   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.375960   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.375991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.376130   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.376328   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.376512   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.376693   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.472304   46374 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:23.477044   46374 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:23.477070   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:23.477177   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:23.477287   46374 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:23.477425   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:23.493987   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:23.519048   46374 start.go:303] post-start completed in 147.566985ms
	I1205 20:52:23.519082   46374 fix.go:56] fixHost completed within 21.27172194s
	I1205 20:52:23.519107   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.522260   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522700   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.522735   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522967   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.523238   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523456   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.523893   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.524220   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.524239   46374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:23.648717   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809543.591713401
	
	I1205 20:52:23.648743   46374 fix.go:206] guest clock: 1701809543.591713401
	I1205 20:52:23.648755   46374 fix.go:219] Guest: 2023-12-05 20:52:23.591713401 +0000 UTC Remote: 2023-12-05 20:52:23.519087629 +0000 UTC m=+358.020977056 (delta=72.625772ms)
	I1205 20:52:23.648800   46374 fix.go:190] guest clock delta is within tolerance: 72.625772ms
	I1205 20:52:23.648808   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 21.401495157s
	I1205 20:52:23.648838   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.649149   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:23.652098   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652534   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.652577   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652773   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653350   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653552   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653655   46374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:23.653709   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.653948   46374 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:23.653989   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.657266   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657547   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657637   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657669   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657946   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657957   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.657970   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.658236   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.658250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658438   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658532   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658756   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.658785   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658933   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.777965   46374 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:23.784199   46374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:23.948621   46374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:23.957081   46374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:23.957163   46374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:23.978991   46374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:23.979023   46374 start.go:475] detecting cgroup driver to use...
	I1205 20:52:23.979124   46374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:23.997195   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:24.015420   46374 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:24.015494   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:24.031407   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:24.047587   46374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:24.200996   46374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:24.332015   46374 docker.go:219] disabling docker service ...
	I1205 20:52:24.332095   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:24.350586   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:24.367457   46374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:24.545467   46374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:24.733692   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:24.748391   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:24.768555   46374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:24.768644   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.780668   46374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:24.780740   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.792671   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.806500   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
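The three sed edits above adjust CRI-O's drop-in config: the pause image, the cgroup manager, and conmon's cgroup. A sketch of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after those edits (section headers assumed; the real file carries more settings than shown here):

    # Sketch only - keys touched by the sed commands above.
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"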
	I1205 20:52:24.818442   46374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:24.829822   46374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:24.842070   46374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:24.842138   46374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:24.857370   46374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
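The sysctl probe fails because the br_netfilter module is not loaded yet (without it there is no /proc/sys/net/bridge tree), so minikube loads the module and enables IPv4 forwarding; both are prerequisites for bridge-based pod networking. Done by hand, the same fix-up would be roughly:

    # Equivalent of the netfilter/forwarding steps above; the sysctl re-check is
    # an extra verification step, not something recorded in this log.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward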
	I1205 20:52:24.867993   46374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:25.024629   46374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:25.231556   46374 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:25.231630   46374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:25.237863   46374 start.go:543] Will wait 60s for crictl version
	I1205 20:52:25.237929   46374 ssh_runner.go:195] Run: which crictl
	I1205 20:52:25.242501   46374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:25.289507   46374 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:25.289591   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.340432   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.398354   46374 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:25.399701   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:25.402614   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.402997   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:25.403029   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.403259   46374 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:25.407873   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
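That bash one-liner strips any stale host.minikube.internal record from /etc/hosts and re-appends the current gateway address, leaving a single entry of the form:

    # Resulting /etc/hosts record (derived from the command above).
    192.168.72.1	host.minikube.internal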
	I1205 20:52:25.420725   46374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:25.420801   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:25.468651   46374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:25.468726   46374 ssh_runner.go:195] Run: which lz4
	I1205 20:52:25.473976   46374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:25.478835   46374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:25.478871   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
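No preloaded images were found in the CRI-O store and /preloaded.tar.lz4 did not yet exist on the VM, so the ~458 MB preload tarball is copied over; it is unpacked a few lines further down in this log. Gathered for reference, the check-and-extract sequence is:

    # Steps visible in this log, collected in one place.
    sudo crictl images --output json                 # what does the runtime already have?
    stat -c "%s %y" /preloaded.tar.lz4               # is the tarball already on the VM?
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the cached images under /var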
	I1205 20:52:20.852220   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:20.867614   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:20.892008   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:20.912985   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:20.913027   46866 system_pods.go:61] "coredns-76f75df574-8d24t" [10265d3b-ddf0-4559-8194-d42563df88a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:20.913038   46866 system_pods.go:61] "etcd-no-preload-143651" [a6b62f23-a944-41ec-b465-6027fcf1f413] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:20.913051   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [5a6b5874-6c6b-4ed6-aa68-8e7fc35a486e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:20.913061   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [42b01d8c-2d8f-467e-8183-eef2e6f73b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:20.913074   46866 system_pods.go:61] "kube-proxy-mltvl" [9adea5d0-e824-40ff-b5b4-16f84fd439ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:20.913085   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [17474fca-8390-48db-bebe-47c1e2cf7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:20.913107   46866 system_pods.go:61] "metrics-server-57f55c9bc5-mhxpn" [3eb25a58-bea3-4266-9bf8-8f186ee65e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:20.913120   46866 system_pods.go:61] "storage-provisioner" [cfe9d24c-a534-4778-980b-99f7addcf0b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:20.913132   46866 system_pods.go:74] duration metric: took 21.101691ms to wait for pod list to return data ...
	I1205 20:52:20.913143   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:20.917108   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:20.917140   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:20.917156   46866 node_conditions.go:105] duration metric: took 4.003994ms to run NodePressure ...
	I1205 20:52:20.917180   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.315507   46866 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321271   46866 kubeadm.go:787] kubelet initialised
	I1205 20:52:21.321301   46866 kubeadm.go:788] duration metric: took 5.763416ms waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321310   46866 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:21.327760   46866 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:23.354192   46866 pod_ready.go:102] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:25.353274   46866 pod_ready.go:92] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:25.353356   46866 pod_ready.go:81] duration metric: took 4.02555842s waiting for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:25.353372   46866 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
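The pod_ready polling above is minikube's built-in wait for system-critical pods. An equivalent manual check against the same cluster would look something like this (illustrative; assumes a kubeconfig context named after the no-preload-143651 profile):

    # Hypothetical manual equivalent of the readiness wait - not part of the recorded run.
    kubectl --context no-preload-143651 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m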
	I1205 20:52:21.402472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.498902   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.585971   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:21.586073   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:21.605993   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.120378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.620326   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.119466   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.619549   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.120228   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.143130   47365 api_server.go:72] duration metric: took 2.557157382s to wait for apiserver process to appear ...
	I1205 20:52:24.143163   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:24.143182   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:27.608165   46700 retry.go:31] will retry after 7.717398196s: kubelet not initialised
	I1205 20:52:28.335417   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:28.335446   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:28.335457   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.429478   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.429507   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:28.929996   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.936475   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.936525   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.430308   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.437787   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:29.437838   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.930326   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.942625   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:52:29.953842   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:29.953875   47365 api_server.go:131] duration metric: took 5.810704359s to wait for apiserver health ...
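The 403 → 500 → 200 progression above is the apiserver's /healthz endpoint coming up: anonymous requests are rejected until the RBAC bootstrap roles exist, then the individual post-start hooks flip from [-] to [+] until the endpoint returns a bare "ok". The same probe can be reproduced by hand:

    # Illustrative re-run of the probe minikube issues above.
    curl -ks https://192.168.39.27:8444/healthz             # plain "ok" once healthy
    curl -ks https://192.168.39.27:8444/healthz?verbose     # per-check [+]/[-] breakdown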
	I1205 20:52:29.953889   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:29.953904   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:29.955505   47365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:27.326223   46374 crio.go:444] Took 1.852284 seconds to copy over tarball
	I1205 20:52:27.326333   46374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:27.374784   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:29.378733   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:30.375181   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:30.375266   46866 pod_ready.go:81] duration metric: took 5.021883955s waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.375316   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:29.956914   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:29.981391   47365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
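As with the other profile earlier in this log, minikube writes its bridge CNI config straight from memory: 457 bytes to /etc/cni/net.d/1-k8s.conflist. A sketch of what such a bridge conflist typically contains (field values assumed for illustration, not read back from the VM):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }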
	I1205 20:52:30.016634   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:30.030957   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:30.031030   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:30.031047   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:30.031069   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:30.031088   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:30.031117   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:30.031135   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:30.031148   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:30.031165   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:30.031177   47365 system_pods.go:74] duration metric: took 14.513879ms to wait for pod list to return data ...
	I1205 20:52:30.031190   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:30.035458   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:30.035493   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:30.035506   47365 node_conditions.go:105] duration metric: took 4.295594ms to run NodePressure ...
	I1205 20:52:30.035525   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:30.302125   47365 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307852   47365 kubeadm.go:787] kubelet initialised
	I1205 20:52:30.307875   47365 kubeadm.go:788] duration metric: took 5.724991ms waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307883   47365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:30.316621   47365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.323682   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323716   47365 pod_ready.go:81] duration metric: took 7.060042ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.323728   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323736   47365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.338909   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338945   47365 pod_ready.go:81] duration metric: took 15.198541ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.338967   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338977   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.349461   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349491   47365 pod_ready.go:81] duration metric: took 10.504515ms waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.349505   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349513   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.422520   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422553   47365 pod_ready.go:81] duration metric: took 73.030993ms waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.422569   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422588   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.212527   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212553   47365 pod_ready.go:81] duration metric: took 789.956497ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.212564   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212575   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.727110   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727140   47365 pod_ready.go:81] duration metric: took 514.553589ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.727154   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727162   47365 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.168658   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168695   47365 pod_ready.go:81] duration metric: took 441.52358ms waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:32.168711   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168720   47365 pod_ready.go:38] duration metric: took 1.860826751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:32.168747   47365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:52:32.182053   47365 ops.go:34] apiserver oom_adj: -16
	I1205 20:52:32.182075   47365 kubeadm.go:640] restartCluster took 22.440428452s
	I1205 20:52:32.182083   47365 kubeadm.go:406] StartCluster complete in 22.493245354s
	I1205 20:52:32.182130   47365 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.182208   47365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:52:32.184035   47365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.290773   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:52:32.290931   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:32.290921   47365 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:52:32.291055   47365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291079   47365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291088   47365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291099   47365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-463614"
	I1205 20:52:32.291123   47365 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291133   47365 addons.go:240] addon metrics-server should already be in state true
	I1205 20:52:32.291177   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291093   47365 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291220   47365 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:52:32.291298   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291586   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291607   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291633   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291635   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291713   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291739   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.311298   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I1205 20:52:32.311514   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I1205 20:52:32.311541   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1205 20:52:32.311733   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.311932   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312026   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312291   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312325   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312434   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312456   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312487   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312501   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312688   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312763   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312833   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.313276   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313300   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.313359   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313390   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.316473   47365 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.316493   47365 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:52:32.316520   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.317093   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.317125   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.328598   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1205 20:52:32.329097   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.329225   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I1205 20:52:32.329589   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.329608   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.329674   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.330230   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.330248   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.330298   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330484   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330553   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330719   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330908   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I1205 20:52:32.331201   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.331935   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.331953   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.332351   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.332472   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.332653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.512055   47365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:52:32.333098   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.511993   47365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:52:32.536814   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:52:32.512201   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.536942   47365 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.536958   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:52:32.536985   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.536843   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:52:32.537043   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.541412   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541780   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541924   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.541958   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542190   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542369   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.542394   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542434   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.542641   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.542748   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542905   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.542939   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.543088   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.543246   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.554014   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1205 20:52:32.554513   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.554975   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.555007   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.555387   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.555634   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.557606   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.557895   47365 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.557911   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:52:32.557936   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.561075   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.561553   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.561942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.562135   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.562338   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.673513   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.682442   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:52:32.682472   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:52:32.706007   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.726379   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:52:32.726413   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:52:32.779247   47365 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:52:32.780175   47365 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-463614" context rescaled to 1 replicas
	I1205 20:52:32.780220   47365 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:52:32.787518   47365 out.go:177] * Verifying Kubernetes components...
	I1205 20:52:32.790046   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:32.796219   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:32.796248   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:52:32.854438   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:34.594203   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920648219s)
	I1205 20:52:34.594267   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594294   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888240954s)
	I1205 20:52:34.594331   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594343   47365 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.80425984s)
	I1205 20:52:34.594373   47365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:34.594350   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594710   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594729   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.594755   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594772   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594783   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594801   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594754   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594860   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.595134   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595195   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595229   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595238   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.595356   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595375   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.610358   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.610390   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.610651   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.610677   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689242   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.834763203s)
	I1205 20:52:34.689294   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689309   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.689648   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.689698   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.689717   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689740   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.690020   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.690025   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.690035   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.690046   47365 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-463614"
	I1205 20:52:34.692072   47365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
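The addon installation logged above follows a consistent pattern: each manifest is streamed over SSH into /etc/kubernetes/addons/ (the "scp memory -->" entries), then applied with the kubectl binary already present on the node under /var/lib/minikube/binaries. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh follows; the key path, host address, and manifest name are illustrative stand-ins, not values from this run, and this is not minikube's actual ssh_runner/sshutil implementation:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Load the node's SSH private key (illustrative path).
	key, err := os.ReadFile("/home/user/.minikube/machines/node/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	client, err := ssh.Dial("tcp", "192.168.39.27:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "scp memory": stream the manifest bytes into a file on the node via stdin.
	manifest, err := os.ReadFile("storageclass.yaml")
	if err != nil {
		log.Fatal(err)
	}
	write, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	write.Stdin = bytes.NewReader(manifest)
	if err := write.Run("sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null"); err != nil {
		log.Fatal(err)
	}
	write.Close()

	// Apply it with the kubectl binary bundled on the node.
	apply, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer apply.Close()
	out, err := apply.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
	fmt.Print(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
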
	I1205 20:52:30.639619   46374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.313251826s)
	I1205 20:52:30.641314   46374 crio.go:451] Took 3.315054 seconds to extract the tarball
	I1205 20:52:30.641328   46374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:30.687076   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:30.745580   46374 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:30.745603   46374 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:30.745681   46374 ssh_runner.go:195] Run: crio config
	I1205 20:52:30.807631   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:30.807656   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:30.807674   46374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:30.807692   46374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-331495 NodeName:embed-certs-331495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:30.807828   46374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-331495"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:30.807897   46374 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-331495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:30.807958   46374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:30.820571   46374 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:30.820679   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:30.831881   46374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:52:30.852058   46374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:30.870516   46374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 20:52:30.888000   46374 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:30.892529   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:30.904910   46374 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495 for IP: 192.168.72.180
	I1205 20:52:30.904950   46374 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:30.905143   46374 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:30.905197   46374 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:30.905280   46374 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/client.key
	I1205 20:52:30.905336   46374 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key.379caec1
	I1205 20:52:30.905368   46374 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key
	I1205 20:52:30.905463   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:30.905489   46374 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:30.905499   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:30.905525   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:30.905550   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:30.905572   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:30.905609   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:30.906129   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:30.930322   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:30.953120   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:30.976792   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:31.000462   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:31.025329   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:31.050451   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:31.075644   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:31.101693   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:31.125712   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:31.149721   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:31.173466   46374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:31.191836   46374 ssh_runner.go:195] Run: openssl version
	I1205 20:52:31.197909   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:31.212206   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219081   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219155   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.225423   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:31.239490   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:31.251505   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256613   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256678   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.262730   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:31.274879   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:31.286201   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291593   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291658   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.298904   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:31.310560   46374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:31.315670   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:31.322461   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:31.328590   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:31.334580   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:31.341827   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:31.348456   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
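The openssl x509 -noout -in ... -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours before the cluster restart is attempted. A small Go sketch of the equivalent check with crypto/x509; the certificate path is a placeholder, not taken from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d (the openssl -checkend semantics).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks apiserver-etcd-client.crt, etcd/server.crt, etc.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
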
	I1205 20:52:31.354835   46374 kubeadm.go:404] StartCluster: {Name:embed-certs-331495 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:31.354945   46374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:31.355024   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:31.396272   46374 cri.go:89] found id: ""
	I1205 20:52:31.396346   46374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:31.406603   46374 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:31.406629   46374 kubeadm.go:636] restartCluster start
	I1205 20:52:31.406683   46374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:31.417671   46374 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.419068   46374 kubeconfig.go:92] found "embed-certs-331495" server: "https://192.168.72.180:8443"
	I1205 20:52:31.421304   46374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:31.432188   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.432260   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.445105   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.445132   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.445182   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.457857   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.958205   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.958322   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.972477   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.458645   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.458732   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.475471   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.958778   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.958872   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.973340   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.458838   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.458924   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.475090   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.958680   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.958776   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.974789   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.458297   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.458371   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.471437   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.958961   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.959030   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.972007   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:35.458648   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.458729   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.471573   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.362684   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.362706   46866 pod_ready.go:81] duration metric: took 1.98737949s waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.362715   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368694   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.368717   46866 pod_ready.go:81] duration metric: took 5.993796ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368726   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375418   46866 pod_ready.go:92] pod "kube-proxy-mltvl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.375442   46866 pod_ready.go:81] duration metric: took 6.709035ms waiting for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375452   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383393   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.383418   46866 pod_ready.go:81] duration metric: took 7.957397ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383430   46866 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:34.497914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:34.693693   47365 addons.go:502] enable addons completed in 2.40279745s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 20:52:35.331317   46700 retry.go:31] will retry after 13.122920853s: kubelet not initialised
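The retry.go entry above ("will retry after 13.122920853s: kubelet not initialised") reflects a jittered retry helper: the operation is reattempted with a growing, randomized delay until it succeeds or the overall deadline passes. A stdlib-only sketch of that pattern; the backoff policy and durations here are illustrative, not minikube's exact implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op until it succeeds or timeout elapses,
// sleeping an exponentially growing, jittered delay between attempts.
func retryWithBackoff(op func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 2 * time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		// Add up to 50% jitter so concurrent callers do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}
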
	I1205 20:52:35.958930   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.959020   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.971607   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.458135   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.458202   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.475097   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.958621   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.958703   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.974599   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.458670   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.458790   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.472296   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.958470   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.958561   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.971241   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.458862   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.458957   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.471475   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.958727   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.958807   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.971366   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.458991   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.459084   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.471352   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.958955   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.959052   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.972803   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:40.458181   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.458251   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.470708   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.499335   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:38.996779   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:36.611450   47365 node_ready.go:58] node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:39.111234   47365 node_ready.go:49] node "default-k8s-diff-port-463614" has status "Ready":"True"
	I1205 20:52:39.111266   47365 node_ready.go:38] duration metric: took 4.51686489s waiting for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:39.111278   47365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:39.117815   47365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124431   47365 pod_ready.go:92] pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.124455   47365 pod_ready.go:81] duration metric: took 6.615213ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124464   47365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131301   47365 pod_ready.go:92] pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.131340   47365 pod_ready.go:81] duration metric: took 6.85604ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131352   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:41.155265   47365 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"False"
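The interleaved pod_ready.go entries above all do the same thing: fetch each system-critical pod and poll until its PodReady condition is True, logging a "Ready":"False" line on every attempt that is not yet satisfied. A minimal client-go sketch of one such wait; the kubeconfig path, namespace, and pod name are placeholders, and this is not the pod_ready.go source itself:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-example-node", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False", polling again`)
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
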
	I1205 20:52:40.958830   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.958921   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.970510   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:41.432806   46374 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:41.432840   46374 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:41.432854   46374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:41.432909   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:41.476486   46374 cri.go:89] found id: ""
	I1205 20:52:41.476550   46374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:41.493676   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:41.503594   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:41.503681   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512522   46374 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512550   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:41.645081   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.368430   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.586289   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.657555   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.753020   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:42.753103   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:42.767926   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.286111   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.786148   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.285601   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.785638   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.285508   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.326812   46374 api_server.go:72] duration metric: took 2.573794156s to wait for apiserver process to appear ...
	I1205 20:52:45.326839   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:45.326857   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327337   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:45.327367   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327771   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:40.998702   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:43.508882   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:42.152898   47365 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:42.152926   47365 pod_ready.go:81] duration metric: took 3.021552509s waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:42.152939   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320531   47365 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.320632   47365 pod_ready.go:81] duration metric: took 1.167680941s waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320660   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521255   47365 pod_ready.go:92] pod "kube-proxy-g4zct" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.521286   47365 pod_ready.go:81] duration metric: took 200.606753ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521300   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911946   47365 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.911972   47365 pod_ready.go:81] duration metric: took 390.664131ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911983   47365 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:46.220630   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.459426   46700 kubeadm.go:787] kubelet initialised
	I1205 20:52:48.459452   46700 kubeadm.go:788] duration metric: took 53.977281861s waiting for restarted kubelet to initialise ...
	I1205 20:52:48.459460   46700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:48.465332   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471155   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.471184   46700 pod_ready.go:81] duration metric: took 5.815983ms waiting for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471195   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476833   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.476861   46700 pod_ready.go:81] duration metric: took 5.658311ms waiting for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476876   46700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481189   46700 pod_ready.go:92] pod "etcd-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.481217   46700 pod_ready.go:81] duration metric: took 4.332284ms waiting for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481230   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485852   46700 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.485869   46700 pod_ready.go:81] duration metric: took 4.630813ms waiting for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485879   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:45.828213   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.185115   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.185143   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.185156   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.228977   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.229017   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.328278   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.336930   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.336971   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:49.828530   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.835188   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.835215   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:50.328834   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.337852   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:50.337885   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:45.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:47.998466   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.497317   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.828313   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.835050   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:52:50.844093   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:50.844124   46374 api_server.go:131] duration metric: took 5.517278039s to wait for apiserver health ...
	I1205 20:52:50.844134   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:50.844141   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:50.846047   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:48.220942   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.720446   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.858954   46700 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.858980   46700 pod_ready.go:81] duration metric: took 373.093905ms waiting for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.858989   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260468   46700 pod_ready.go:92] pod "kube-proxy-r5n6g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.260493   46700 pod_ready.go:81] duration metric: took 401.497792ms waiting for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260501   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658952   46700 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.658977   46700 pod_ready.go:81] duration metric: took 398.469864ms waiting for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658986   46700 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:51.966947   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.848285   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:50.865469   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:50.918755   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:50.951671   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:50.951705   46374 system_pods.go:61] "coredns-5dd5756b68-7xr6w" [8300dbf8-413a-4171-9e56-53f0f2d03fd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:50.951712   46374 system_pods.go:61] "etcd-embed-certs-331495" [b2802bcb-262e-4d2a-9589-b1b3885de515] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:50.951722   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [6f9a28a7-8827-4071-8c68-f2671e7a8017] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:50.951738   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [24e85887-7f58-4a5c-b0d4-4eebd6076a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:50.951744   46374 system_pods.go:61] "kube-proxy-76qq2" [ffd744ec-9522-443c-b609-b11e24ab9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:50.951750   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [aaa502dc-a7cf-4f76-b79f-aa8be1ae48f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:50.951756   46374 system_pods.go:61] "metrics-server-57f55c9bc5-bcg28" [e60503c2-732d-44a3-b5da-fbf7a0cfd981] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:50.951761   46374 system_pods.go:61] "storage-provisioner" [be1aa61b-82e9-4382-ab1c-89e30b801fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:50.951767   46374 system_pods.go:74] duration metric: took 32.973877ms to wait for pod list to return data ...
	I1205 20:52:50.951773   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:50.971413   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:50.971440   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:50.971449   46374 node_conditions.go:105] duration metric: took 19.672668ms to run NodePressure ...
	I1205 20:52:50.971465   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:51.378211   46374 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383462   46374 kubeadm.go:787] kubelet initialised
	I1205 20:52:51.383487   46374 kubeadm.go:788] duration metric: took 5.246601ms waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383495   46374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:51.393558   46374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:53.414801   46374 pod_ready.go:102] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.426681   46374 pod_ready.go:92] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:55.426710   46374 pod_ready.go:81] duration metric: took 4.033124274s waiting for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:55.426725   46374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:52.498509   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.997539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:53.221825   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.723682   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.468896   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:56.966471   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.468158   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.469797   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.497582   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.500937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.727756   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.727968   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.466541   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469387   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469996   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.968435   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.969033   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.969065   46374 pod_ready.go:81] duration metric: took 9.542324599s waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.969073   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975019   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.975041   46374 pod_ready.go:81] duration metric: took 5.961268ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975049   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980743   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.980771   46374 pod_ready.go:81] duration metric: took 5.713974ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980779   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985565   46374 pod_ready.go:92] pod "kube-proxy-76qq2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.985596   46374 pod_ready.go:81] duration metric: took 4.805427ms waiting for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985610   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992009   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.992035   46374 pod_ready.go:81] duration metric: took 6.416324ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992047   46374 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:01.996877   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.997311   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:02.221319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.720314   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.966830   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.465943   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:07.272848   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.272897   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:05.997810   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.497408   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.722608   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.222226   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.965894   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.967253   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.466458   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.773608   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.773778   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.997547   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:12.999476   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.496736   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.721128   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.721371   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.221780   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.466602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.965160   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.272951   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.772527   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.497284   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.498006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.223073   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.724402   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.966424   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.466866   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.772789   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.273369   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:21.997270   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.496150   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:23.221999   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.223587   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.967755   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.465568   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.772596   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:30.273464   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:26.496470   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.003099   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.721654   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.724134   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.466332   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.966465   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.773521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:35.272236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.497006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.000663   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.221725   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.719806   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.466035   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.966501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:37.773436   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.274255   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.496949   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.996265   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.721339   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.723854   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.221087   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:39.465585   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.465785   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.467239   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:42.773263   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:44.773717   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.998588   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.496904   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.497783   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.222148   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.722122   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.966317   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.966572   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.272412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:49.273057   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.997444   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.496708   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.722350   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.219843   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.467523   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.967357   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:51.773424   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:53.775574   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.499839   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.997448   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.222442   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.719693   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:55.466751   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:57.966602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.271805   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.272923   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.273306   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.998244   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:59.498440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.720684   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.729688   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.220861   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.466162   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.966846   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.773903   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.271747   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.995748   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:04.002522   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:03.723212   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.224289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.465907   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.466264   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.272960   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.274281   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.497442   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.997440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.721146   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:10.724743   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.966368   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.966796   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.772305   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.772470   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.496229   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.497913   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.221912   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.722076   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:14.467708   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:16.965932   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.773481   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:17.774552   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.273733   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.998027   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.496453   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.497053   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.223289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.722234   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.966869   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:21.465921   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:23.466328   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.272550   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.497084   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:24.498177   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.727882   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.221485   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.966388   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.466553   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.772616   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.773188   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:26.997209   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.997776   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.721711   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.722528   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:30.964854   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.966383   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.272612   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.275600   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:31.498601   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:33.997450   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.220641   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.222232   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.476491   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.968512   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.772248   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.272991   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.997574   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.999016   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.501116   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.723179   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.220182   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.469607   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.968860   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.274044   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.502208   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:44.997516   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.720811   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.721757   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.725689   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.466766   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.966704   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.773511   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.273161   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.274031   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.497342   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:49.502501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.223549   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.719890   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.465849   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.466157   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.772748   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:55.272781   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:51.997636   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.499333   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.720512   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.721826   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.466519   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.466580   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.274370   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.774179   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.997654   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.497915   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.221713   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.723015   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:58.965289   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:00.966027   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.967557   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.273349   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.773101   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.996491   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:03.996649   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.723123   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.220986   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.224736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.466592   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.966611   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.773180   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.774008   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.997589   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.998076   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.001226   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.720517   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.221172   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.466096   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.467200   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.272981   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.773210   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.496043   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.497518   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.725751   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.219939   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.966795   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:17.466501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.272578   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.273500   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.997861   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.499434   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.221058   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.720978   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.466641   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.965389   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.772109   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.274633   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.997800   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:24.497501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.220292   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.722738   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.966366   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.966799   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.465341   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.773108   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:27.774236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.274971   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:26.997610   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.997753   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.220185   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.220399   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.466026   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.966220   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.772859   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:35.272898   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:31.497899   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:33.500772   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.220696   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.221098   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.222701   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.966787   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.465676   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.775190   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.272006   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.000539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.497044   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.720509   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.730400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:39.468063   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:41.966415   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:42.276412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.772916   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.996937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.496928   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.220575   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.724283   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.465646   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.467000   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.773090   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.273675   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.997477   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:47.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.998126   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.220758   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:50.720911   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.966711   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.468554   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.773277   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:52.501489   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:54.996998   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.221047   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.221493   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.965841   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.965891   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.465977   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.272446   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.772269   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.997565   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.496443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:57.722571   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.724736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.466069   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.966747   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.772715   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.271368   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.274084   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:01.498102   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.498428   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.220645   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.720012   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.966850   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.467719   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.772997   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.273279   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.998642   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:08.001018   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.496939   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:06.721938   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.219709   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:11.220579   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.968249   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.465039   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.773538   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.272696   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.500855   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.996837   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:13.725252   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.725522   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.465989   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:16.966908   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.273749   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.772650   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.496107   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.496914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:18.224365   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:20.720429   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.465513   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.967092   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.775353   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:24.277586   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.498047   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.999733   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.219319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:25.222340   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.967374   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.465973   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.468481   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.772514   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.774642   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.496794   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.498446   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:27.723499   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.222748   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.965650   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.967183   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.777450   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:33.276381   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.999443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.384081   46866 pod_ready.go:81] duration metric: took 4m0.000635015s waiting for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:32.384115   46866 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:32.384132   46866 pod_ready.go:38] duration metric: took 4m11.062812404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:32.384156   46866 kubeadm.go:640] restartCluster took 4m30.437260197s
	W1205 20:56:32.384250   46866 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:32.384280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:32.721610   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.220186   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.467452   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.966451   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.773516   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.773737   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.273185   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.221794   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:39.722400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.466005   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.467531   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.773790   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:45.272396   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:41.722481   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.734080   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.912982   47365 pod_ready.go:81] duration metric: took 4m0.000982583s waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:43.913024   47365 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:43.913038   47365 pod_ready.go:38] duration metric: took 4m4.801748698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:43.913063   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:56:43.913101   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:43.913175   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:43.965196   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:43.965220   47365 cri.go:89] found id: ""
	I1205 20:56:43.965228   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:43.965272   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:43.970257   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:43.970353   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:44.026974   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.027005   47365 cri.go:89] found id: ""
	I1205 20:56:44.027015   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:44.027099   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.032107   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:44.032212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:44.075721   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:44.075758   47365 cri.go:89] found id: ""
	I1205 20:56:44.075766   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:44.075823   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.082125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:44.082212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:44.125099   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:44.125122   47365 cri.go:89] found id: ""
	I1205 20:56:44.125129   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:44.125171   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.129477   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:44.129538   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:44.180281   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.180305   47365 cri.go:89] found id: ""
	I1205 20:56:44.180313   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:44.180357   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.185094   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:44.185173   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:44.228693   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.228719   47365 cri.go:89] found id: ""
	I1205 20:56:44.228730   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:44.228786   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.233574   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:44.233687   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:44.279286   47365 cri.go:89] found id: ""
	I1205 20:56:44.279312   47365 logs.go:284] 0 containers: []
	W1205 20:56:44.279321   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:44.279328   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:44.279390   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:44.333572   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.333598   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:44.333605   47365 cri.go:89] found id: ""
	I1205 20:56:44.333614   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:44.333678   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.339080   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.343653   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:44.343687   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.412744   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:44.412785   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.457374   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:44.457402   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.521640   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:44.521676   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:44.536612   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:44.536636   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.586795   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:44.586836   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:45.065254   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:45.065293   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:45.126209   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:45.126242   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:45.166553   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:45.166580   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:45.214849   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:45.214887   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:45.371687   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:45.371732   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:45.417585   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:45.417615   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:45.455524   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:45.455559   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:44.965462   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.967433   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:47.272958   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.274398   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.621173   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.236869123s)
	I1205 20:56:46.621264   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:46.636086   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:46.647003   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:46.657201   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:46.657241   46866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:56:46.882231   46866 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:48.007463   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:56:48.023675   47365 api_server.go:72] duration metric: took 4m15.243410399s to wait for apiserver process to appear ...
	I1205 20:56:48.023713   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:56:48.023748   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:48.023818   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:48.067278   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.067301   47365 cri.go:89] found id: ""
	I1205 20:56:48.067308   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:48.067359   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.072370   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:48.072446   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:48.118421   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:48.118444   47365 cri.go:89] found id: ""
	I1205 20:56:48.118453   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:48.118509   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.123954   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:48.124019   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:48.173864   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:48.173890   47365 cri.go:89] found id: ""
	I1205 20:56:48.173900   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:48.173955   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.178717   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:48.178790   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:48.221891   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:48.221915   47365 cri.go:89] found id: ""
	I1205 20:56:48.221924   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:48.221985   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.226811   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:48.226886   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:48.271431   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:48.271454   47365 cri.go:89] found id: ""
	I1205 20:56:48.271463   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:48.271518   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.276572   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:48.276655   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:48.326438   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:48.326466   47365 cri.go:89] found id: ""
	I1205 20:56:48.326476   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:48.326534   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.334539   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:48.334611   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:48.377929   47365 cri.go:89] found id: ""
	I1205 20:56:48.377955   47365 logs.go:284] 0 containers: []
	W1205 20:56:48.377965   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:48.377973   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:48.378035   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:48.430599   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:48.430621   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:48.430629   47365 cri.go:89] found id: ""
	I1205 20:56:48.430638   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:48.430691   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.434882   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.439269   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:48.439299   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.495069   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:48.495113   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:48.955220   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:48.955257   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:48.971222   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:48.971246   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:49.108437   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:49.108470   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:49.150916   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:49.150940   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:49.207092   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:49.207141   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:49.251940   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:49.251969   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:49.293885   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:49.293918   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:49.349151   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:49.349187   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:49.403042   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:49.403079   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:49.466816   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:49.466858   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:49.525300   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:49.525341   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:49.467873   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.659950   46700 pod_ready.go:81] duration metric: took 4m0.000950283s waiting for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:49.659985   46700 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:49.660008   46700 pod_ready.go:38] duration metric: took 4m1.200539602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:49.660056   46700 kubeadm.go:640] restartCluster took 5m17.548124184s
	W1205 20:56:49.660130   46700 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:49.660162   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:51.776117   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:54.275521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:52.099610   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:56:52.106838   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:56:52.109813   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:56:52.109835   47365 api_server.go:131] duration metric: took 4.086114093s to wait for apiserver health ...
	I1205 20:56:52.109845   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:56:52.109874   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:52.109929   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:52.155290   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:52.155319   47365 cri.go:89] found id: ""
	I1205 20:56:52.155328   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:52.155382   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.160069   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:52.160137   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:52.197857   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.197885   47365 cri.go:89] found id: ""
	I1205 20:56:52.197894   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:52.197956   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.203012   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:52.203075   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:52.257881   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.257904   47365 cri.go:89] found id: ""
	I1205 20:56:52.257914   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:52.257972   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.264817   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:52.264899   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:52.313302   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.313331   47365 cri.go:89] found id: ""
	I1205 20:56:52.313341   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:52.313398   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.318864   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:52.318972   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:52.389306   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.389333   47365 cri.go:89] found id: ""
	I1205 20:56:52.389342   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:52.389400   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.406125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:52.406194   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:52.458735   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:52.458760   47365 cri.go:89] found id: ""
	I1205 20:56:52.458770   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:52.458821   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.463571   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:52.463642   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:52.529035   47365 cri.go:89] found id: ""
	I1205 20:56:52.529067   47365 logs.go:284] 0 containers: []
	W1205 20:56:52.529079   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:52.529088   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:52.529157   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:52.583543   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:52.583578   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.583585   47365 cri.go:89] found id: ""
	I1205 20:56:52.583594   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:52.583649   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.589299   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.595000   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:52.595024   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.671447   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:52.671487   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.719185   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:52.719223   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:52.780173   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:52.780203   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.823808   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:52.823843   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.874394   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:52.874428   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:52.938139   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:52.938177   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.982386   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:52.982414   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:53.029082   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:53.029111   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:53.447057   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:53.447099   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:53.465029   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:53.465066   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:53.627351   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:53.627400   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:53.694357   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:53.694393   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:56.267579   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:56:56.267614   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.267624   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.267631   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.267638   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.267644   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.267650   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.267660   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.267672   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.267683   47365 system_pods.go:74] duration metric: took 4.157830691s to wait for pod list to return data ...
	I1205 20:56:56.267696   47365 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:56:56.271148   47365 default_sa.go:45] found service account: "default"
	I1205 20:56:56.271170   47365 default_sa.go:55] duration metric: took 3.468435ms for default service account to be created ...
	I1205 20:56:56.271176   47365 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:56:56.277630   47365 system_pods.go:86] 8 kube-system pods found
	I1205 20:56:56.277654   47365 system_pods.go:89] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.277660   47365 system_pods.go:89] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.277665   47365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.277669   47365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.277674   47365 system_pods.go:89] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.277679   47365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.277688   47365 system_pods.go:89] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.277696   47365 system_pods.go:89] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.277715   47365 system_pods.go:126] duration metric: took 6.533492ms to wait for k8s-apps to be running ...
	I1205 20:56:56.277726   47365 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:56:56.277772   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:56.296846   47365 system_svc.go:56] duration metric: took 19.109991ms WaitForService to wait for kubelet.
	I1205 20:56:56.296877   47365 kubeadm.go:581] duration metric: took 4m23.516618576s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:56:56.296902   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:56:56.301504   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:56:56.301530   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:56:56.301542   47365 node_conditions.go:105] duration metric: took 4.634882ms to run NodePressure ...
	I1205 20:56:56.301552   47365 start.go:228] waiting for startup goroutines ...
	I1205 20:56:56.301560   47365 start.go:233] waiting for cluster config update ...
	I1205 20:56:56.301573   47365 start.go:242] writing updated cluster config ...
	I1205 20:56:56.301859   47365 ssh_runner.go:195] Run: rm -f paused
	I1205 20:56:56.357189   47365 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:56:56.358798   47365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-463614" cluster and "default" namespace by default
	I1205 20:56:54.756702   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096505481s)
	I1205 20:56:54.756786   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:54.774684   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:54.786308   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:54.796762   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:54.796809   46700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 20:56:55.081318   46700 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:58.569752   46866 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1205 20:56:58.569873   46866 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:56:58.569988   46866 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:56:58.570119   46866 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:56:58.570261   46866 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:56:58.570368   46866 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:56:58.572785   46866 out.go:204]   - Generating certificates and keys ...
	I1205 20:56:58.573020   46866 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:56:58.573232   46866 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:56:58.573410   46866 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:56:58.573510   46866 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:56:58.573717   46866 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:56:58.573868   46866 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:56:58.574057   46866 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:56:58.574229   46866 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:56:58.574517   46866 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:56:58.574760   46866 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:56:58.574903   46866 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:56:58.575070   46866 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:56:58.575205   46866 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:56:58.575363   46866 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:56:58.575515   46866 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:56:58.575600   46866 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:56:58.575799   46866 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:56:58.576083   46866 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:56:58.576320   46866 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:56:58.580654   46866 out.go:204]   - Booting up control plane ...
	I1205 20:56:58.581337   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:56:58.581851   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:56:58.582029   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:56:58.582667   46866 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:56:58.582988   46866 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:56:58.583126   46866 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:56:58.583631   46866 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:56:58.583908   46866 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502137 seconds
	I1205 20:56:58.584157   46866 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:56:58.584637   46866 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:56:58.584882   46866 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:56:58.585370   46866 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:56:58.585492   46866 kubeadm.go:322] [bootstrap-token] Using token: fap3k3.pr3uz4d90n7oyvds
	I1205 20:56:58.590063   46866 out.go:204]   - Configuring RBAC rules ...
	I1205 20:56:58.590356   46866 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:56:58.590482   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:56:58.590692   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:56:58.590887   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:56:58.591031   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:56:58.591131   46866 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:56:58.591269   46866 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:56:58.591323   46866 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:56:58.591378   46866 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:56:58.591383   46866 kubeadm.go:322] 
	I1205 20:56:58.591455   46866 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:56:58.591462   46866 kubeadm.go:322] 
	I1205 20:56:58.591554   46866 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:56:58.591559   46866 kubeadm.go:322] 
	I1205 20:56:58.591590   46866 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:56:58.591659   46866 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:56:58.591719   46866 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:56:58.591724   46866 kubeadm.go:322] 
	I1205 20:56:58.591787   46866 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:56:58.591793   46866 kubeadm.go:322] 
	I1205 20:56:58.591848   46866 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:56:58.591853   46866 kubeadm.go:322] 
	I1205 20:56:58.591914   46866 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:56:58.592015   46866 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:56:58.592093   46866 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:56:58.592099   46866 kubeadm.go:322] 
	I1205 20:56:58.592197   46866 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:56:58.592300   46866 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:56:58.592306   46866 kubeadm.go:322] 
	I1205 20:56:58.592403   46866 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592525   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:56:58.592550   46866 kubeadm.go:322] 	--control-plane 
	I1205 20:56:58.592558   46866 kubeadm.go:322] 
	I1205 20:56:58.592645   46866 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:56:58.592650   46866 kubeadm.go:322] 
	I1205 20:56:58.592743   46866 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592870   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:56:58.592880   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:56:58.592889   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:56:58.594456   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:56:56.773764   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.778395   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.595862   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:56:58.625177   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:56:58.683896   46866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:56:58.683977   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.684060   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=no-preload-143651 minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.741242   46866 ops.go:34] apiserver oom_adj: -16
	I1205 20:56:59.114129   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.238212   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.869086   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:00.368538   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.272299   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:03.272604   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:04.992619   46374 pod_ready.go:81] duration metric: took 4m0.000553964s waiting for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:04.992652   46374 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:57:04.992691   46374 pod_ready.go:38] duration metric: took 4m13.609186276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:04.992726   46374 kubeadm.go:640] restartCluster took 4m33.586092425s
	W1205 20:57:04.992782   46374 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:57:04.992808   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:57:00.868500   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.369084   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.368409   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.869341   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.368765   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.869054   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.368855   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.869144   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:05.368635   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.047040   46700 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1205 20:57:09.047132   46700 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:09.047236   46700 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:09.047350   46700 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:09.047462   46700 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:09.047583   46700 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:09.047693   46700 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:09.047752   46700 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1205 20:57:09.047825   46700 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:09.049606   46700 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:09.049706   46700 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:09.049802   46700 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:09.049885   46700 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:09.049963   46700 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:09.050058   46700 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:09.050148   46700 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:09.050235   46700 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:09.050350   46700 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:09.050468   46700 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:09.050563   46700 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:09.050627   46700 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:09.050732   46700 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:09.050817   46700 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:09.050897   46700 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:09.050997   46700 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:09.051080   46700 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:09.051165   46700 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:09.052610   46700 out.go:204]   - Booting up control plane ...
	I1205 20:57:09.052722   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:09.052806   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:09.052870   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:09.052965   46700 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:09.053103   46700 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:09.053203   46700 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005642 seconds
	I1205 20:57:09.053354   46700 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:09.053514   46700 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:09.053563   46700 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:09.053701   46700 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-061206 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 20:57:09.053783   46700 kubeadm.go:322] [bootstrap-token] Using token: syik3l.i77juzhd1iybx3my
	I1205 20:57:09.055286   46700 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:09.055409   46700 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:09.055599   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:09.055749   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:09.055862   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:09.055982   46700 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:09.056043   46700 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:09.056106   46700 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:09.056116   46700 kubeadm.go:322] 
	I1205 20:57:09.056197   46700 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:09.056207   46700 kubeadm.go:322] 
	I1205 20:57:09.056307   46700 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:09.056329   46700 kubeadm.go:322] 
	I1205 20:57:09.056377   46700 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:09.056456   46700 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:09.056533   46700 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:09.056540   46700 kubeadm.go:322] 
	I1205 20:57:09.056600   46700 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:09.056669   46700 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:09.056729   46700 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:09.056737   46700 kubeadm.go:322] 
	I1205 20:57:09.056804   46700 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1205 20:57:09.056868   46700 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:09.056874   46700 kubeadm.go:322] 
	I1205 20:57:09.056944   46700 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057093   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:09.057135   46700 kubeadm.go:322]     --control-plane 	  
	I1205 20:57:09.057150   46700 kubeadm.go:322] 
	I1205 20:57:09.057252   46700 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:09.057260   46700 kubeadm.go:322] 
	I1205 20:57:09.057360   46700 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057502   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:09.057514   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:57:09.057520   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:09.058762   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:05.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.368434   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.869228   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.369175   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.868933   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.369028   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.868920   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.369223   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.869130   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.369240   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.869318   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.369189   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.576975   46866 kubeadm.go:1088] duration metric: took 12.893071134s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:11.577015   46866 kubeadm.go:406] StartCluster complete in 5m9.690903424s
	I1205 20:57:11.577039   46866 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.577129   46866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:11.579783   46866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.580131   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:11.580364   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:57:11.580360   46866 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:11.580446   46866 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143651"
	I1205 20:57:11.580467   46866 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143651"
	W1205 20:57:11.580479   46866 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:11.580518   46866 addons.go:69] Setting metrics-server=true in profile "no-preload-143651"
	I1205 20:57:11.580535   46866 addons.go:231] Setting addon metrics-server=true in "no-preload-143651"
	W1205 20:57:11.580544   46866 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:11.580575   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580583   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580982   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580994   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580497   46866 addons.go:69] Setting default-storageclass=true in profile "no-preload-143651"
	I1205 20:57:11.581018   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581027   46866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143651"
	I1205 20:57:11.581303   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581357   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.581383   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.600887   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1205 20:57:11.600886   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1205 20:57:11.601552   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601681   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601760   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I1205 20:57:11.602152   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602177   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602260   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.602348   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602370   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602603   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602719   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602806   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.602996   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.603020   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.603329   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.603379   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.603477   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.603997   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.604040   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.606962   46866 addons.go:231] Setting addon default-storageclass=true in "no-preload-143651"
	W1205 20:57:11.606986   46866 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:11.607009   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.607331   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.607363   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.624885   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I1205 20:57:11.625358   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.625857   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.625869   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.626331   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.626627   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I1205 20:57:11.626832   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.627179   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.631282   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1205 20:57:11.632431   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.632516   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.632599   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.632763   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.633113   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.633639   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.633883   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.634495   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.634539   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.634823   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.637060   46866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:11.635196   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.641902   46866 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:11.641932   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:11.641960   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.642616   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.644862   46866 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:11.647090   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:11.647113   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:11.647134   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.646852   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647539   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.647564   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647755   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.648063   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.648295   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.648520   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.654458   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.654493   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654522   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.654556   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654801   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.655015   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.655247   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.661244   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1205 20:57:11.661886   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.662508   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.662534   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.663651   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.663907   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.666067   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.666501   46866 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.666523   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:11.666543   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.669659   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670106   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.670132   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670479   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.670673   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.670802   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.670915   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.816687   46866 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143651" context rescaled to 1 replicas
	I1205 20:57:11.816742   46866 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:11.820014   46866 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:09.060305   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:09.069861   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:09.093691   46700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:09.093847   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.093914   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=old-k8s-version-061206 minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.123857   46700 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:09.315555   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.435904   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.049845   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.549703   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.049931   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.549848   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.049776   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.549841   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.050053   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.549531   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.821903   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:11.831116   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:11.867528   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.969463   46866 node_ready.go:35] waiting up to 6m0s for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:11.976207   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:11.976235   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:11.977230   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:12.003110   46866 node_ready.go:49] node "no-preload-143651" has status "Ready":"True"
	I1205 20:57:12.003132   46866 node_ready.go:38] duration metric: took 33.629273ms waiting for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:12.003142   46866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:12.053173   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:12.053208   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:12.140411   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:12.170492   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.170521   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:12.251096   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.778963   46866 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:12.779026   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779040   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779377   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779402   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.779411   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779411   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.779418   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779625   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779665   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.786021   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.786045   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.786331   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.786380   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.786400   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194477   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217217088s)
	I1205 20:57:13.194529   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194543   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.194883   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.194929   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.194948   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194960   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194970   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.195198   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.195212   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562441   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311301688s)
	I1205 20:57:13.562496   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562512   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.562826   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.562845   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562856   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562865   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.563115   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.563164   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.563177   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.563190   46866 addons.go:467] Verifying addon metrics-server=true in "no-preload-143651"
	I1205 20:57:13.564940   46866 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:13.566316   46866 addons.go:502] enable addons completed in 1.985974766s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:14.389400   46866 pod_ready.go:102] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:15.388445   46866 pod_ready.go:92] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.388478   46866 pod_ready.go:81] duration metric: took 3.248030471s waiting for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.388493   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.391728   46866 pod_ready.go:97] error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391759   46866 pod_ready.go:81] duration metric: took 3.251498ms waiting for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:15.391772   46866 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391781   46866 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399725   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.399745   46866 pod_ready.go:81] duration metric: took 7.956804ms waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399759   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407412   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.407436   46866 pod_ready.go:81] duration metric: took 7.672123ms waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407446   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414249   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.414295   46866 pod_ready.go:81] duration metric: took 6.840313ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414309   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587237   46866 pod_ready.go:92] pod "kube-proxy-6txsz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.587271   46866 pod_ready.go:81] duration metric: took 172.95478ms waiting for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587286   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985901   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.985930   46866 pod_ready.go:81] duration metric: took 398.634222ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985943   46866 pod_ready.go:38] duration metric: took 3.982790764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:15.985960   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:15.986019   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:16.009052   46866 api_server.go:72] duration metric: took 4.192253908s to wait for apiserver process to appear ...
	I1205 20:57:16.009082   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:16.009100   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:57:16.014689   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:57:16.015758   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:57:16.015781   46866 api_server.go:131] duration metric: took 6.691652ms to wait for apiserver health ...
	I1205 20:57:16.015791   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:16.188198   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:16.188232   46866 system_pods.go:61] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.188240   46866 system_pods.go:61] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.188246   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.188254   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.188261   46866 system_pods.go:61] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.188267   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.188279   46866 system_pods.go:61] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.188290   46866 system_pods.go:61] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.188301   46866 system_pods.go:74] duration metric: took 172.503422ms to wait for pod list to return data ...
	I1205 20:57:16.188311   46866 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:16.384722   46866 default_sa.go:45] found service account: "default"
	I1205 20:57:16.384759   46866 default_sa.go:55] duration metric: took 196.435091ms for default service account to be created ...
	I1205 20:57:16.384769   46866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:16.587515   46866 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:16.587542   46866 system_pods.go:89] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.587547   46866 system_pods.go:89] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.587554   46866 system_pods.go:89] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.587561   46866 system_pods.go:89] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.587567   46866 system_pods.go:89] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.587574   46866 system_pods.go:89] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.587585   46866 system_pods.go:89] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.587593   46866 system_pods.go:89] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.587604   46866 system_pods.go:126] duration metric: took 202.829744ms to wait for k8s-apps to be running ...
	I1205 20:57:16.587613   46866 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:16.587654   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:16.602489   46866 system_svc.go:56] duration metric: took 14.864421ms WaitForService to wait for kubelet.
	I1205 20:57:16.602521   46866 kubeadm.go:581] duration metric: took 4.785728725s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:16.602545   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:16.785610   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:16.785646   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:16.785663   46866 node_conditions.go:105] duration metric: took 183.112914ms to run NodePressure ...
	I1205 20:57:16.785677   46866 start.go:228] waiting for startup goroutines ...
	I1205 20:57:16.785686   46866 start.go:233] waiting for cluster config update ...
	I1205 20:57:16.785705   46866 start.go:242] writing updated cluster config ...
	I1205 20:57:16.786062   46866 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:16.840981   46866 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1205 20:57:16.842980   46866 out.go:177] * Done! kubectl is now configured to use "no-preload-143651" cluster and "default" namespace by default
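	For reference, the apiserver health probe recorded at 20:57:16 above can be reproduced manually against the same endpoint. This is only an illustrative sketch, assuming the /healthz path is reachable without client credentials on this cluster; the address and port are taken directly from the log line "Checking apiserver healthz at https://192.168.61.162:8443/healthz":

		# Probe the kube-apiserver health endpoint; a healthy control plane answers HTTP 200 with body "ok".
		curl -sk https://192.168.61.162:8443/healthz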
	I1205 20:57:14.049305   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:14.549423   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.050061   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.550221   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.049450   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.550094   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.049900   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.549923   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.050255   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.549399   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.615362   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.62253521s)
	I1205 20:57:19.615425   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:19.633203   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:57:19.643629   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:57:19.653655   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:57:19.653717   46374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:57:19.709748   46374 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:57:19.709836   46374 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:19.887985   46374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:19.888143   46374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:19.888243   46374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:20.145182   46374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:20.147189   46374 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:20.147319   46374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:20.147389   46374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:20.147482   46374 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:20.147875   46374 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:20.148583   46374 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:20.149486   46374 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:20.150362   46374 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:20.150974   46374 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:20.151523   46374 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:20.152166   46374 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:20.152419   46374 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:20.152504   46374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:20.435395   46374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:20.606951   46374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:20.754435   46374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:20.953360   46374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:20.954288   46374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:20.958413   46374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:19.049689   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.549608   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.049856   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.550245   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.050001   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.549839   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.049908   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.549764   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.050204   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.550196   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.049420   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.550152   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.050103   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.202067   46700 kubeadm.go:1088] duration metric: took 16.108268519s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:25.202100   46700 kubeadm.go:406] StartCluster complete in 5m53.142100786s
	I1205 20:57:25.202121   46700 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.202211   46700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:25.204920   46700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.205284   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:25.205635   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:57:25.205792   46700 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:25.205865   46700 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061206"
	I1205 20:57:25.205888   46700 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-061206"
	W1205 20:57:25.205896   46700 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:25.205954   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.205982   46700 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206011   46700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061206"
	I1205 20:57:25.206429   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206436   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206457   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206459   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206517   46700 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206531   46700 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-061206"
	W1205 20:57:25.206538   46700 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:25.206578   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.206906   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206936   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.228876   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1205 20:57:25.228902   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1205 20:57:25.229036   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1205 20:57:25.229487   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229569   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229646   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.230209   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230230   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230413   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230426   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230468   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230492   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230851   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.231494   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.231520   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.231955   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.232544   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.232578   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.233084   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.233307   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.237634   46700 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-061206"
	W1205 20:57:25.237660   46700 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:25.237691   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.238103   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.238138   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.252274   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I1205 20:57:25.252709   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.253307   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.253327   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.253689   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.253874   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.255891   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.258376   46700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:25.256849   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I1205 20:57:25.260119   46700 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.260145   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:25.260168   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.261358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.262042   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.262063   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.262590   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.262765   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.265705   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.265905   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.267942   46700 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:25.266347   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.266528   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.269653   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.269661   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:25.269687   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:25.269708   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.270383   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.270602   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.270764   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.274415   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.274914   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.274939   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.275267   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.275451   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.275594   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.275736   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.282847   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1205 20:57:25.283552   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.284174   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.284192   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.284659   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.285434   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.285469   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.306845   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1205 20:57:25.307358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.307884   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.307905   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.308302   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.308605   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.310363   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.310649   46700 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.310663   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:25.310682   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.313904   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314451   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.314482   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314756   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.314941   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.315053   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.315153   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.456874   46700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061206" context rescaled to 1 replicas
	I1205 20:57:25.456922   46700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:25.459008   46700 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:20.960444   46374 out.go:204]   - Booting up control plane ...
	I1205 20:57:20.960603   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:20.960721   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:20.961220   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:20.981073   46374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:20.982383   46374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:20.982504   46374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:57:21.127167   46374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:25.460495   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:25.531367   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.531600   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:25.531618   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:25.543589   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.624622   46700 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.624655   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:25.660979   46700 node_ready.go:49] node "old-k8s-version-061206" has status "Ready":"True"
	I1205 20:57:25.661005   46700 node_ready.go:38] duration metric: took 36.286483ms waiting for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.661017   46700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:25.666179   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:25.666208   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:25.796077   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:26.018114   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.018141   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:26.124357   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.905138   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.37373154s)
	I1205 20:57:26.905210   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905526   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905553   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.905567   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905576   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:26.905905   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905917   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964563   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.964593   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.964920   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.964940   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964974   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465231   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.92160273s)
	I1205 20:57:27.465236   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.840348969s)
	I1205 20:57:27.465312   46700 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:27.465289   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465379   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.465718   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465761   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.465771   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.465780   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465790   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.467788   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.467820   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.467829   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628166   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.503702639s)
	I1205 20:57:27.628242   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628262   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628592   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628617   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628627   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628637   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628714   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.628851   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628866   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628885   46700 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-061206"
	I1205 20:57:27.632134   46700 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:27.634065   46700 addons.go:502] enable addons completed in 2.428270131s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:28.052082   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:29.630980   46374 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503524 seconds
	I1205 20:57:29.631109   46374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:29.651107   46374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:30.184174   46374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:30.184401   46374 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-331495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:57:30.703275   46374 kubeadm.go:322] [bootstrap-token] Using token: 28cbrl.nve3765a0enwbcr0
	I1205 20:57:30.705013   46374 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:30.705155   46374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:30.718386   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:57:30.727275   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:30.734448   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:30.741266   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:30.746706   46374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:30.765198   46374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:57:31.046194   46374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:31.133417   46374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:31.133438   46374 kubeadm.go:322] 
	I1205 20:57:31.133501   46374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:31.133509   46374 kubeadm.go:322] 
	I1205 20:57:31.133647   46374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:31.133667   46374 kubeadm.go:322] 
	I1205 20:57:31.133707   46374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:31.133781   46374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:31.133853   46374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:31.133863   46374 kubeadm.go:322] 
	I1205 20:57:31.133918   46374 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:57:31.133925   46374 kubeadm.go:322] 
	I1205 20:57:31.133983   46374 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:57:31.133993   46374 kubeadm.go:322] 
	I1205 20:57:31.134042   46374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:31.134103   46374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:31.134262   46374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:31.134300   46374 kubeadm.go:322] 
	I1205 20:57:31.134417   46374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:57:31.134526   46374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:31.134541   46374 kubeadm.go:322] 
	I1205 20:57:31.134671   46374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.134823   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:31.134858   46374 kubeadm.go:322] 	--control-plane 
	I1205 20:57:31.134867   46374 kubeadm.go:322] 
	I1205 20:57:31.134986   46374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:31.134997   46374 kubeadm.go:322] 
	I1205 20:57:31.135114   46374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.135272   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:31.135908   46374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:57:31.135934   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:57:31.135944   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:31.137845   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:30.540402   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:33.040756   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:31.139429   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:31.181897   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
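	The 457-byte bridge conflist copied to /etc/cni/net.d/1-k8s.conflist above is not reproduced in the log. As an illustration only (the cniVersion, subnet, and plugin fields below are assumptions, not the contents of the file this test wrote), a bridge-plus-portmap conflist of that general shape can be sketched like this:

// cni_conflist_sketch.go - illustrative only; not the file minikube generated in this run.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A minimal bridge + portmap plugin chain, roughly the shape of a
	// /etc/cni/net.d/*.conflist for the "bridge" CNI plugin. All values are placeholders.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"addIf":            "true",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // placeholder pod CIDR
				},
			},
			{
				"type": "portmap",
				"capabilities": map[string]bool{
					"portMappings": true,
				},
			},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}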
	I1205 20:57:31.202833   46374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:31.202901   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.202910   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=embed-certs-331495 minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.298252   46374 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:31.569929   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.694250   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.294912   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.795323   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.295495   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.794998   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.294843   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.794730   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:35.295505   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.538542   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.538568   46700 pod_ready.go:81] duration metric: took 8.742457359s waiting for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.538579   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.540738   46700 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540763   46700 pod_ready.go:81] duration metric: took 2.177251ms waiting for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:34.540771   46700 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540777   46700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545336   46700 pod_ready.go:92] pod "kube-proxy-j68qr" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.545360   46700 pod_ready.go:81] duration metric: took 4.576584ms waiting for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545370   46700 pod_ready.go:38] duration metric: took 8.884340587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:34.545387   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:34.545442   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:34.561744   46700 api_server.go:72] duration metric: took 9.104792218s to wait for apiserver process to appear ...
	I1205 20:57:34.561769   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:34.561786   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:57:34.568456   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:57:34.569584   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:57:34.569608   46700 api_server.go:131] duration metric: took 7.832231ms to wait for apiserver health ...
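	The api_server.go lines above poll https://192.168.50.116:8443/healthz until it answers 200 ok before moving on. A minimal sketch of that kind of health poll (not minikube's implementation; the timeout values and the InsecureSkipVerify transport are assumptions made for the sketch):

// healthz_poll_sketch.go - illustrative polling of a kube-apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in a test cluster like this serves a self-signed cert,
		// so certificate verification is skipped here (assumption for the sketch).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.116:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}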
	I1205 20:57:34.569618   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:34.573936   46700 system_pods.go:59] 4 kube-system pods found
	I1205 20:57:34.573962   46700 system_pods.go:61] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.573969   46700 system_pods.go:61] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.573979   46700 system_pods.go:61] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.573989   46700 system_pods.go:61] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.574004   46700 system_pods.go:74] duration metric: took 4.378461ms to wait for pod list to return data ...
	I1205 20:57:34.574016   46700 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:34.577236   46700 default_sa.go:45] found service account: "default"
	I1205 20:57:34.577258   46700 default_sa.go:55] duration metric: took 3.232577ms for default service account to be created ...
	I1205 20:57:34.577268   46700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:34.581061   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.581080   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.581086   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.581093   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.581098   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.581112   46700 retry.go:31] will retry after 312.287284ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:34.898504   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.898531   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.898536   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.898545   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.898549   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.898563   46700 retry.go:31] will retry after 340.858289ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.244211   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.244237   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.244242   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.244249   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.244253   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.244267   46700 retry.go:31] will retry after 398.30611ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.649011   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.649042   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.649050   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.649061   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.649068   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.649086   46700 retry.go:31] will retry after 397.404602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.052047   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.052079   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.052087   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.052097   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.052105   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.052124   46700 retry.go:31] will retry after 604.681853ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.662177   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.662206   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.662213   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.662223   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.662229   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.662247   46700 retry.go:31] will retry after 732.227215ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:37.399231   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:37.399264   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:37.399272   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:37.399282   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:37.399289   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:37.399308   46700 retry.go:31] will retry after 1.17612773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.795241   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.295081   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.795352   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.295506   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.794785   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.294797   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.794948   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.295478   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.795706   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:40.295444   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.581173   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:38.581201   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:38.581207   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:38.581220   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:38.581225   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:38.581239   46700 retry.go:31] will retry after 1.118915645s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:39.704807   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:39.704835   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:39.704841   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:39.704847   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:39.704854   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:39.704872   46700 retry.go:31] will retry after 1.49556329s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:41.205278   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:41.205316   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:41.205324   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:41.205331   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:41.205336   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:41.205357   46700 retry.go:31] will retry after 2.273757829s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:43.485079   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:43.485109   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:43.485125   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:43.485132   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:43.485137   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:43.485153   46700 retry.go:31] will retry after 2.2120181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
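	The retry.go lines above keep re-listing the kube-system pods with a gradually growing delay until etcd, kube-apiserver, kube-controller-manager and kube-scheduler appear. A generic sketch of that poll-with-growing-backoff pattern (the backoff policy and the checked condition below are placeholders, not minikube's retry implementation):

// retry_backoff_sketch.go - illustrative retry loop with a growing delay.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls check until it succeeds or the overall budget runs out,
// stretching the wait a little on every attempt (similar in spirit to the
// "will retry after ..." lines in the log, but not the same algorithm).
func retryWithBackoff(check func() error, initial, budget time.Duration) error {
	delay := initial
	deadline := time.Now().Add(budget)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay by 50% each round (placeholder policy)
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver")
		}
		return nil
	}, 300*time.Millisecond, 10*time.Second)
	fmt.Println("result:", err)
}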
	I1205 20:57:40.794725   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.295631   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.795542   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.295514   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.795481   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.295525   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.795463   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.295442   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.451570   46374 kubeadm.go:1088] duration metric: took 13.248732973s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:44.451605   46374 kubeadm.go:406] StartCluster complete in 5m13.096778797s
	I1205 20:57:44.451631   46374 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.451730   46374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:44.454306   46374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.454587   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:44.454611   46374 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:44.454695   46374 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-331495"
	I1205 20:57:44.454720   46374 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-331495"
	W1205 20:57:44.454731   46374 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:44.454766   46374 addons.go:69] Setting default-storageclass=true in profile "embed-certs-331495"
	I1205 20:57:44.454781   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.454783   46374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-331495"
	I1205 20:57:44.454840   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:57:44.454884   46374 addons.go:69] Setting metrics-server=true in profile "embed-certs-331495"
	I1205 20:57:44.454899   46374 addons.go:231] Setting addon metrics-server=true in "embed-certs-331495"
	W1205 20:57:44.454907   46374 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:44.454949   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.455191   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455213   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455216   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455231   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455237   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455259   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.473063   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1205 20:57:44.473083   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1205 20:57:44.473135   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I1205 20:57:44.473509   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.473642   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474153   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474171   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474179   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474197   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474336   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474566   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474637   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474761   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474785   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474877   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.475234   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475260   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.475295   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.475833   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475871   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.478828   46374 addons.go:231] Setting addon default-storageclass=true in "embed-certs-331495"
	W1205 20:57:44.478852   46374 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:44.478882   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.479277   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.479311   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.493193   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1205 20:57:44.493380   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:57:44.493637   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.493775   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.494092   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494108   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494242   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494252   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494488   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494624   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.494682   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.496908   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.497156   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.498954   46374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:44.500583   46374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:44.499205   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1205 20:57:44.502186   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:44.502199   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:44.502214   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.502313   46374 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.502329   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:44.502349   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.503728   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.504065   46374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-331495" context rescaled to 1 replicas
	I1205 20:57:44.504105   46374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:44.505773   46374 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:44.507622   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:44.505350   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.507719   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.505638   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.507792   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.507821   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.506710   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.507399   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508237   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.508287   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508353   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.508369   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508440   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.508506   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.508671   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508678   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.508996   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.509016   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.509373   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.509567   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.525720   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I1205 20:57:44.526352   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.526817   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.526831   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.527096   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.527248   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.529415   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.529714   46374 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.529725   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:44.529737   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.532475   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533019   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.533042   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.533393   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.533527   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.533614   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.688130   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:44.688235   46374 node_ready.go:35] waiting up to 6m0s for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727420   46374 node_ready.go:49] node "embed-certs-331495" has status "Ready":"True"
	I1205 20:57:44.727442   46374 node_ready.go:38] duration metric: took 39.185885ms waiting for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727450   46374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:44.732130   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:44.732147   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:44.738201   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.771438   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.811415   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:44.811441   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:44.813276   46374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:44.891164   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:44.891188   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:44.982166   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:46.640482   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.952307207s)
	I1205 20:57:46.640514   46374 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:46.640492   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.902257941s)
	I1205 20:57:46.640549   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640567   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.640954   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.640974   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.640985   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640994   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.641299   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.641316   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.641317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669046   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.669072   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.669393   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669467   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.669486   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229043   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.457564146s)
	I1205 20:57:47.229106   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229122   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.229427   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.229442   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229451   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229460   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.230375   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:47.230383   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.230399   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.269645   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.287430037s)
	I1205 20:57:47.269701   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.269717   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270028   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270044   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270053   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.270062   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270370   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270387   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270397   46374 addons.go:467] Verifying addon metrics-server=true in "embed-certs-331495"
	I1205 20:57:47.272963   46374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:45.704352   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:45.704382   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:45.704392   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:45.704402   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:45.704408   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:45.704427   46700 retry.go:31] will retry after 3.581529213s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:47.274340   46374 addons.go:502] enable addons completed in 2.819728831s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:47.280325   46374 pod_ready.go:102] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:48.746184   46374 pod_ready.go:92] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.746205   46374 pod_ready.go:81] duration metric: took 3.932903963s waiting for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.746212   46374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752060   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.752078   46374 pod_ready.go:81] duration metric: took 5.859638ms waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752088   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757347   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.757367   46374 pod_ready.go:81] duration metric: took 5.273527ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757375   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762850   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.762869   46374 pod_ready.go:81] duration metric: took 5.4878ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762876   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767874   46374 pod_ready.go:92] pod "kube-proxy-tbr8k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.767896   46374 pod_ready.go:81] duration metric: took 5.013139ms waiting for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767907   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141813   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:49.141836   46374 pod_ready.go:81] duration metric: took 373.922185ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141844   46374 pod_ready.go:38] duration metric: took 4.414384404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:49.141856   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:49.141898   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:49.156536   46374 api_server.go:72] duration metric: took 4.652397468s to wait for apiserver process to appear ...
	I1205 20:57:49.156566   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:49.156584   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:57:49.162837   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:57:49.164588   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:57:49.164606   46374 api_server.go:131] duration metric: took 8.03498ms to wait for apiserver health ...
	I1205 20:57:49.164613   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:49.346033   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:49.346065   46374 system_pods.go:61] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.346069   46374 system_pods.go:61] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.346074   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.346079   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.346082   46374 system_pods.go:61] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.346086   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.346092   46374 system_pods.go:61] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.346098   46374 system_pods.go:61] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:57:49.346105   46374 system_pods.go:74] duration metric: took 181.48718ms to wait for pod list to return data ...
	I1205 20:57:49.346111   46374 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:49.541758   46374 default_sa.go:45] found service account: "default"
	I1205 20:57:49.541783   46374 default_sa.go:55] duration metric: took 195.666774ms for default service account to be created ...
	I1205 20:57:49.541791   46374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:49.746101   46374 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:49.746131   46374 system_pods.go:89] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.746136   46374 system_pods.go:89] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.746142   46374 system_pods.go:89] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.746147   46374 system_pods.go:89] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.746150   46374 system_pods.go:89] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.746155   46374 system_pods.go:89] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.746170   46374 system_pods.go:89] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.746175   46374 system_pods.go:89] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Running
	I1205 20:57:49.746183   46374 system_pods.go:126] duration metric: took 204.388635ms to wait for k8s-apps to be running ...
	I1205 20:57:49.746193   46374 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:49.746241   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:49.764758   46374 system_svc.go:56] duration metric: took 18.554759ms WaitForService to wait for kubelet.
	I1205 20:57:49.764784   46374 kubeadm.go:581] duration metric: took 5.260652386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:49.764801   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:49.942067   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:49.942095   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:49.942105   46374 node_conditions.go:105] duration metric: took 177.300297ms to run NodePressure ...
	I1205 20:57:49.942114   46374 start.go:228] waiting for startup goroutines ...
	I1205 20:57:49.942120   46374 start.go:233] waiting for cluster config update ...
	I1205 20:57:49.942129   46374 start.go:242] writing updated cluster config ...
	I1205 20:57:49.942407   46374 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:49.995837   46374 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:57:49.997691   46374 out.go:177] * Done! kubectl is now configured to use "embed-certs-331495" cluster and "default" namespace by default
	I1205 20:57:49.291672   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:49.291700   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:49.291705   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:49.291713   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.291718   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:49.291736   46700 retry.go:31] will retry after 3.015806566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:52.313677   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:52.313703   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:52.313711   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:52.313721   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:52.313727   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:52.313747   46700 retry.go:31] will retry after 4.481475932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:56.804282   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:56.804308   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:56.804314   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:56.804321   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:56.804325   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:56.804340   46700 retry.go:31] will retry after 6.744179014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:03.556623   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:58:03.556652   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:03.556660   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:03.556669   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:03.556676   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:03.556696   46700 retry.go:31] will retry after 7.974872066s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:11.540488   46700 system_pods.go:86] 6 kube-system pods found
	I1205 20:58:11.540516   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:11.540522   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Pending
	I1205 20:58:11.540526   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Pending
	I1205 20:58:11.540530   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:11.540537   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:11.540541   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:11.540556   46700 retry.go:31] will retry after 10.29278609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:21.841415   46700 system_pods.go:86] 7 kube-system pods found
	I1205 20:58:21.841442   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:21.841450   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:21.841457   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:21.841463   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:21.841468   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:21.841478   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:21.841485   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:21.841503   46700 retry.go:31] will retry after 10.997616244s: missing components: kube-scheduler
	I1205 20:58:32.846965   46700 system_pods.go:86] 8 kube-system pods found
	I1205 20:58:32.846999   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:32.847007   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:32.847016   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:32.847023   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:32.847028   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:32.847032   46700 system_pods.go:89] "kube-scheduler-old-k8s-version-061206" [e19a40ac-ac9b-4dc8-8ed3-c13da266bb88] Running
	I1205 20:58:32.847041   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:32.847049   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:32.847061   46700 system_pods.go:126] duration metric: took 58.26978612s to wait for k8s-apps to be running ...
	I1205 20:58:32.847074   46700 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:58:32.847122   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:58:32.866233   46700 system_svc.go:56] duration metric: took 19.150294ms WaitForService to wait for kubelet.
	I1205 20:58:32.866267   46700 kubeadm.go:581] duration metric: took 1m7.409317219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:58:32.866308   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:58:32.870543   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:58:32.870569   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:58:32.870581   46700 node_conditions.go:105] duration metric: took 4.266682ms to run NodePressure ...
	I1205 20:58:32.870604   46700 start.go:228] waiting for startup goroutines ...
	I1205 20:58:32.870630   46700 start.go:233] waiting for cluster config update ...
	I1205 20:58:32.870646   46700 start.go:242] writing updated cluster config ...
	I1205 20:58:32.870888   46700 ssh_runner.go:195] Run: rm -f paused
	I1205 20:58:32.922554   46700 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1205 20:58:32.924288   46700 out.go:177] 
	W1205 20:58:32.925788   46700 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1205 20:58:32.927148   46700 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1205 20:58:32.928730   46700 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-061206" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:54 UTC, ends at Tue 2023-12-05 21:05:58 UTC. --
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.086074304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810358086060785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0dc90a31-8977-4acf-9298-53413ac94d50 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.086848181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7883bfb2-0365-42b8-a628-da7cd971536f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.086923077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7883bfb2-0365-42b8-a628-da7cd971536f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.087164459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7883bfb2-0365-42b8-a628-da7cd971536f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.136907820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=28f5437f-c2d8-4cb4-877f-1dbe52571d17 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.137003080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=28f5437f-c2d8-4cb4-877f-1dbe52571d17 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.138570044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=10b181a8-d017-4248-8319-5e12a6a46797 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.139158582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810358139144597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=10b181a8-d017-4248-8319-5e12a6a46797 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.140060084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbef6332-df9d-4f1e-ba24-256f70caaec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.140133405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbef6332-df9d-4f1e-ba24-256f70caaec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.140409220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbef6332-df9d-4f1e-ba24-256f70caaec9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.183485253Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fd2de95b-c61f-4a0e-93ca-6ac3e9b665f1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.183666348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fd2de95b-c61f-4a0e-93ca-6ac3e9b665f1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.185995092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d59231f3-34dd-4842-a571-f766675321e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.186717219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810358186694558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d59231f3-34dd-4842-a571-f766675321e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.188126000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a265f938-c091-4a83-984e-3784f48ed461 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.188274785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a265f938-c091-4a83-984e-3784f48ed461 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.188483776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a265f938-c091-4a83-984e-3784f48ed461 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.230510331Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0a0c35b5-7a67-43e2-9108-0c9ec19f7bf3 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.230607143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0a0c35b5-7a67-43e2-9108-0c9ec19f7bf3 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.232758050Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e6638e3f-0532-44f0-aba5-0e87b888c2c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.233149404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810358233135916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e6638e3f-0532-44f0-aba5-0e87b888c2c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.234082659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a717148-c758-4080-ae93-3a93b42dde4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.234156692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a717148-c758-4080-ae93-3a93b42dde4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:05:58 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:05:58.234431931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a717148-c758-4080-ae93-3a93b42dde4b name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a816a407fb68       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   50a0a6b4bb2f3       storage-provisioner
	fd9f43596f484       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4b4aab3e67527       busybox
	95dae582422a9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   8cf1f37bacb06       coredns-5dd5756b68-6pmzf
	6c766515e85b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   50a0a6b4bb2f3       storage-provisioner
	15eee84995781       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   001db274a0604       kube-proxy-g4zct
	1eed3a831d6e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   a68e05aae5679       etcd-default-k8s-diff-port-463614
	e019875171430       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   4834a14fc48a0       kube-scheduler-default-k8s-diff-port-463614
	fa3b51839f012       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   9d5fe60297906       kube-controller-manager-default-k8s-diff-port-463614
	fad43ea2e090b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   865c4d68c1b2f       kube-apiserver-default-k8s-diff-port-463614
	
	* 
	* ==> coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59433 - 17018 "HINFO IN 1264421714362086919.719460568605505053. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009575149s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-463614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-463614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=default-k8s-diff-port-463614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_46_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:46:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-463614
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:03:11 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:03:11 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:03:11 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:03:11 +0000   Tue, 05 Dec 2023 20:52:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    default-k8s-diff-port-463614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd9c01fd3ee04a0dbbc7cf967abdc193
	  System UUID:                bd9c01fd-3ee0-4a0d-bbc7-cf967abdc193
	  Boot ID:                    e373c9bb-46f6-4c58-b07a-48ad227830a0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-6pmzf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                 etcd-default-k8s-diff-port-463614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-463614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-463614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g4zct                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-463614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-676m6                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-463614 event: Registered Node default-k8s-diff-port-463614 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-463614 event: Registered Node default-k8s-diff-port-463614 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.082144] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.778746] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.201374] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.633567] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 20:52] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.111320] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.149758] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.120003] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.269746] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +18.008163] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[ +15.077639] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] <==
	* {"level":"warn","ts":"2023-12-05T20:52:32.136868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.566911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2023-12-05T20:52:32.136932Z","caller":"traceutil/trace.go:171","msg":"trace[2066266502] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:464; }","duration":"433.633302ms","start":"2023-12-05T20:52:31.703289Z","end":"2023-12-05T20:52:32.136923Z","steps":["trace[2066266502] 'agreement among raft nodes before linearized reading'  (duration: 433.537601ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:52:32.13696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:52:31.703283Z","time spent":"433.669933ms","remote":"127.0.0.1:43298","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":227,"request content":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" "}
	{"level":"warn","ts":"2023-12-05T20:52:32.718186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.525942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9083070211743067521 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-g4zct\" mod_revision:441 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-g4zct\" value_size:4379 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-g4zct\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T20:52:32.718365Z","caller":"traceutil/trace.go:171","msg":"trace[1011290366] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"571.418403ms","start":"2023-12-05T20:52:32.146933Z","end":"2023-12-05T20:52:32.718352Z","steps":["trace[1011290366] 'read index received'  (duration: 398.523911ms)","trace[1011290366] 'applied index is now lower than readState.Index'  (duration: 172.893128ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:52:32.718554Z","caller":"traceutil/trace.go:171","msg":"trace[1158725620] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"571.955472ms","start":"2023-12-05T20:52:32.146587Z","end":"2023-12-05T20:52:32.718543Z","steps":["trace[1158725620] 'process raft request'  (duration: 398.991186ms)","trace[1158725620] 'compare'  (duration: 172.230371ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:52:32.718612Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:52:32.14658Z","time spent":"571.993996ms","remote":"127.0.0.1:43292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4430,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-g4zct\" mod_revision:441 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-g4zct\" value_size:4379 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-g4zct\" > >"}
	{"level":"warn","ts":"2023-12-05T20:52:32.718884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"571.988107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-463614.179e0ac2f8299d33\" ","response":"range_response_count:1 size:791"}
	{"level":"info","ts":"2023-12-05T20:52:32.718913Z","caller":"traceutil/trace.go:171","msg":"trace[1897601429] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-463614.179e0ac2f8299d33; range_end:; response_count:1; response_revision:465; }","duration":"572.017779ms","start":"2023-12-05T20:52:32.146886Z","end":"2023-12-05T20:52:32.718903Z","steps":["trace[1897601429] 'agreement among raft nodes before linearized reading'  (duration: 571.922744ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:52:32.718936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:52:32.146876Z","time spent":"572.054287ms","remote":"127.0.0.1:43268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":813,"request content":"key:\"/registry/events/default/default-k8s-diff-port-463614.179e0ac2f8299d33\" "}
	{"level":"warn","ts":"2023-12-05T20:52:32.719113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"569.386964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2023-12-05T20:52:32.719151Z","caller":"traceutil/trace.go:171","msg":"trace[755770909] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:465; }","duration":"569.427592ms","start":"2023-12-05T20:52:32.149715Z","end":"2023-12-05T20:52:32.719143Z","steps":["trace[755770909] 'agreement among raft nodes before linearized reading'  (duration: 569.322573ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:52:32.719175Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:52:32.149706Z","time spent":"569.462896ms","remote":"127.0.0.1:43298","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":227,"request content":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" "}
	{"level":"warn","ts":"2023-12-05T20:52:32.719765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"453.720729ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3975"}
	{"level":"info","ts":"2023-12-05T20:52:32.719802Z","caller":"traceutil/trace.go:171","msg":"trace[2037793298] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:465; }","duration":"453.761834ms","start":"2023-12-05T20:52:32.266032Z","end":"2023-12-05T20:52:32.719793Z","steps":["trace[2037793298] 'agreement among raft nodes before linearized reading'  (duration: 453.69844ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:52:32.719824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T20:52:32.266012Z","time spent":"453.806495ms","remote":"127.0.0.1:43354","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":3997,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"info","ts":"2023-12-05T20:52:32.964823Z","caller":"traceutil/trace.go:171","msg":"trace[1694111441] linearizableReadLoop","detail":"{readStateIndex:494; appliedIndex:493; }","duration":"191.26722ms","start":"2023-12-05T20:52:32.773535Z","end":"2023-12-05T20:52:32.964802Z","steps":["trace[1694111441] 'read index received'  (duration: 163.732306ms)","trace[1694111441] 'applied index is now lower than readState.Index'  (duration: 27.53355ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T20:52:32.964983Z","caller":"traceutil/trace.go:171","msg":"trace[1821347772] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"201.914962ms","start":"2023-12-05T20:52:32.763049Z","end":"2023-12-05T20:52:32.964964Z","steps":["trace[1821347772] 'process raft request'  (duration: 174.27759ms)","trace[1821347772] 'compare'  (duration: 27.250696ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T20:52:32.965148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.612091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-g4zct\" ","response":"range_response_count:1 size:4445"}
	{"level":"info","ts":"2023-12-05T20:52:32.966826Z","caller":"traceutil/trace.go:171","msg":"trace[1048216464] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-g4zct; range_end:; response_count:1; response_revision:466; }","duration":"193.206025ms","start":"2023-12-05T20:52:32.77351Z","end":"2023-12-05T20:52:32.966716Z","steps":["trace[1048216464] 'agreement among raft nodes before linearized reading'  (duration: 191.503782ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T20:52:32.965765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.33808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2023-12-05T20:52:32.967734Z","caller":"traceutil/trace.go:171","msg":"trace[1731130972] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:466; }","duration":"185.413272ms","start":"2023-12-05T20:52:32.782308Z","end":"2023-12-05T20:52:32.967721Z","steps":["trace[1731130972] 'agreement among raft nodes before linearized reading'  (duration: 183.195015ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:02:26.458961Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":798}
	{"level":"info","ts":"2023-12-05T21:02:26.461456Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":798,"took":"1.978655ms","hash":3976409391}
	{"level":"info","ts":"2023-12-05T21:02:26.461527Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3976409391,"revision":798,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:05:58 up 14 min,  0 users,  load average: 0.15, 0.15, 0.10
	Linux default-k8s-diff-port-463614 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] <==
	* I1205 21:02:28.356920       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:02:29.357730       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:29.357875       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:02:29.357916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:02:29.357752       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:29.358059       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:02:29.360061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:03:28.190828       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:03:29.358552       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:03:29.358620       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:03:29.358632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:03:29.360841       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:03:29.361041       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:03:29.361089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:04:28.190900       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 21:05:28.190828       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:05:29.359699       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:05:29.359834       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:05:29.359867       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:05:29.361992       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:05:29.362101       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:05:29.362109       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] <==
	* I1205 21:00:14.207977       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:00:43.610828       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:00:44.217682       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:01:13.616386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:01:14.227132       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:01:43.622415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:01:44.236701       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:13.628435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:14.247287       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:43.636006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:44.257693       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:03:13.641550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:14.265993       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:03:42.731386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="529.763µs"
	E1205 21:03:43.648835       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:44.275427       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:03:56.730112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="199.682µs"
	E1205 21:04:13.656474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:14.284952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:04:43.663321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:44.294384       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:13.670325       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:14.302964       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:43.676369       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:44.313436       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] <==
	* I1205 20:52:29.919352       1 server_others.go:69] "Using iptables proxy"
	I1205 20:52:29.936504       1 node.go:141] Successfully retrieved node IP: 192.168.39.27
	I1205 20:52:30.023040       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:52:30.023149       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:52:30.026783       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:52:30.026893       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:52:30.027411       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:52:30.027653       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:52:30.028836       1 config.go:188] "Starting service config controller"
	I1205 20:52:30.028907       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:52:30.028959       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:52:30.028984       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:52:30.031092       1 config.go:315] "Starting node config controller"
	I1205 20:52:30.031148       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:52:30.130271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:52:30.130712       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:52:30.132269       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] <==
	* I1205 20:52:26.228126       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:52:28.343829       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:52:28.343939       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:52:28.343981       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:52:28.344015       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:52:28.395018       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1205 20:52:28.395047       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:52:28.397561       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:52:28.397723       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:52:28.398730       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:52:28.399036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:52:28.498878       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:54 UTC, ends at Tue 2023-12-05 21:05:58 UTC. --
	Dec 05 21:03:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:03:27 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:27.725340     926 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:03:27 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:27.725469     926 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:03:27 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:27.725718     926 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h292q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-676m6_kube-system(dc304fd9-2922-42f7-b917-5618c6d43f8d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:03:27 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:27.725760     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:03:42 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:42.711347     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:03:56 default-k8s-diff-port-463614 kubelet[926]: E1205 21:03:56.711424     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:04:08 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:08.710838     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:04:19 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:19.711512     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:04:21 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:21.729615     926 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:04:21 default-k8s-diff-port-463614 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:04:21 default-k8s-diff-port-463614 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:04:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:04:31 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:31.710439     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:04:45 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:45.711899     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:04:56 default-k8s-diff-port-463614 kubelet[926]: E1205 21:04:56.711371     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:05:08 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:08.710841     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:05:21 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:21.736077     926 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:05:21 default-k8s-diff-port-463614 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:05:21 default-k8s-diff-port-463614 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:05:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:05:23 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:23.712148     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:05:34 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:34.710490     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:05:45 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:45.712489     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:05:56 default-k8s-diff-port-463614 kubelet[926]: E1205 21:05:56.711906     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	
	* 
	* ==> storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] <==
	* I1205 20:53:01.059041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:53:01.076782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:53:01.076835       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:53:01.088963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:53:01.089280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895!
	I1205 20:53:01.090306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"283e2cb4-6883-45c7-9630-2d26c91f65d8", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895 became leader
	I1205 20:53:01.189783       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895!
	
	* 
	* ==> storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] <==
	* I1205 20:52:29.906546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 20:52:59.910790       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-676m6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6: exit status 1 (72.808103ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-676m6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.29s)
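Note on this failure: the kubelet log above shows metrics-server stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4. That registry is the deliberately unreachable one configured by the earlier "addons enable metrics-server ... --registries=MetricsServer=fake.domain" step (visible in the Audit table of the next post-mortem), so those pull errors are expected background noise. What actually timed out is the post-stop-start wait for a pod labelled k8s-app=kubernetes-dashboard, presumably the same 9m0s condition the no-preload section below spells out. A minimal manual re-check of that condition, assuming the profile/context name shown in the logs above, could look like:

	# hypothetical manual check mirroring the test's wait condition (context name taken from the logs above)
	kubectl --context default-k8s-diff-port-463614 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# same non-running-pods query the post-mortem helper runs (helpers_test.go:261)
	kubectl --context default-k8s-diff-port-463614 get po -A --field-selector=status.phase!=Running

If the first command returns no pods, the dashboard addon never deployed after the restart, which matches the context-deadline failure reported here.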

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 20:57:37.060147   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:57:46.651782   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143651 -n no-preload-143651
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:06:17.461159941 +0000 UTC m=+5490.273619577
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-143651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-143651 logs -n 25: (1.646259286s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-46361
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
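
The machines-lock lines above (acquiring with Delay:500ms Timeout:13m0s, then "acquired ... in 4m16s") describe one profile waiting for its turn to create or restart a VM. Below is a minimal sketch of that poll-until-acquired-or-timeout pattern, assuming a simple atomic flag as the shared lock; it is not the lock implementation minikube actually uses.

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// machineLock is an illustrative stand-in for the shared "machines lock":
// only one profile may create or start a VM at a time.
var machineLock int32

func tryLock() bool { return atomic.CompareAndSwapInt32(&machineLock, 0, 1) }
func unlock()       { atomic.StoreInt32(&machineLock, 0) }

// acquire polls tryLock every delay until it succeeds or the timeout expires,
// and reports how long the wait took, like the "acquired machines lock" line.
func acquire(name string, delay, timeout time.Duration) error {
	start := time.Now()
	for {
		if tryLock() {
			fmt.Printf("acquired machines lock for %q in %s\n", name, time.Since(start))
			return nil
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out waiting for machines lock for %q", name)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("no-preload-143651", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer unlock()
	fmt.Println("Skipping create...Using existing machine configuration")
}
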
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
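
The WaitForSSH step above shells out to the system ssh binary and runs "exit 0" on the guest until sshd answers. The sketch below reproduces that probe with os/exec, using the host, user, and key path from the log but a trimmed-down option set and an assumed retry count; it is an illustration, not the driver's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs "exit 0" on the guest over ssh; a zero exit status means
// sshd is up and the key is accepted, so provisioning can continue.
func waitForSSH(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	for attempt := 0; attempt < 10; attempt++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", host)
}

func main() {
	key := "/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa"
	if err := waitForSSH("192.168.50.116", key); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Waiting for SSH to be available... done")
}
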
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
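
Provisioning the hostname, as logged above, is two remote commands: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it. A small sketch that only assembles those commands (printing them instead of running them, so the quoting is visible) follows; the SSH runner that would execute them is assumed and not shown.

package main

import "fmt"

// hostnameCommands reproduces the two provisioning steps logged above.
func hostnameCommands(name string) []string {
	setHostname := fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	patchHosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return []string{setHostname, patchHosts}
}

func main() {
	for _, cmd := range hostnameCommands("old-k8s-version-061206") {
		fmt.Println(cmd)
	}
}
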
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
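
The configureAuth step above copies the host CA material and issues a server certificate whose SANs cover the VM IP, localhost, and the node name before scp-ing it to /etc/docker. The compressed Go sketch below shows such a certificate being generated with the SANs and organization seen in the log; for brevity it is self-signed, whereas the real flow signs the server certificate with ca.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-061206"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-061206"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.116"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// The PEM block is what would be copied to /etc/docker/server.pem.
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
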
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
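
The fix.go lines above compare the guest clock (read with date over SSH) against the host clock and accept the machine when the delta stays under a tolerance. A minimal sketch of that comparison follows, using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not minikube's exact threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's "seconds.nanoseconds" output
// (e.g. "1701809482.140720971") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1701809482.140720971")
	if err != nil {
		panic(err)
	}
	host := time.Date(2023, 12, 5, 20, 51, 22, 72625275, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // illustrative threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}
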
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
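
Before restarting CRI-O above, three sed invocations rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, and pin conmon to the pod cgroup. The sketch below applies the same rewrites in memory so the effect is easy to see; appending conmon_cgroup at the end of the file is a simplification, since the actual sed inserts it directly after the cgroup_manager line.

package main

import (
	"fmt"
	"regexp"
)

// applyCrioEdits mirrors, in memory, the edits the sed commands make.
func applyCrioEdits(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	return conf + "conmon_cgroup = \"pod\"\n"
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(applyCrioEdits(sample))
}
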
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
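
The preload step above checks whether /preloaded.tar.lz4 already exists on the machine, copies the ~441 MB tarball over when it does not, unpacks it into /var with lz4, and removes the tarball. A compact local sketch of that sequence follows; the cp call stands in for the scp transfer, the paths are taken from the log, and running it for real would require root and the lz4/tar tools.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload mirrors the check-copy-extract-remove sequence logged above.
func ensurePreload(local, remote string) error {
	if _, err := os.Stat(remote); os.IsNotExist(err) {
		// In the real flow this is an scp to the guest; a local copy stands in here.
		if out, err := exec.Command("cp", local, remote).CombinedOutput(); err != nil {
			return fmt.Errorf("copy preload: %v: %s", err, out)
		}
	}
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", remote).CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return os.Remove(remote)
}

func main() {
	local := "/home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
	if err := ensurePreload(local, "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
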
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
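
The LoadImages section above inspects what the runtime already has, marks pinned images that are missing as "needs transfer", and loads any that exist in the on-host cache with podman load; images with no cached tarball produce the warning just logged. The sketch below shows that decision flow in miniature; the image-name-to-tarball mangling (for example pause_3.1) is an assumption about the cache layout, not a documented contract.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// needsTransfer reports which required images are absent from the runtime.
func needsTransfer(required []string, present map[string]bool) []string {
	var missing []string
	for _, img := range required {
		if !present[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

// loadFromCache runs "podman load" for each missing image whose tarball is
// cached under /var/lib/minikube/images, and reports the rest.
func loadFromCache(missing []string) {
	for _, img := range missing {
		name := strings.NewReplacer("registry.k8s.io/", "", ":", "_", "/", "_").Replace(img)
		tarball := filepath.Join("/var/lib/minikube/images", name)
		if _, err := os.Stat(tarball); err != nil {
			fmt.Printf("X Unable to load cached image %s: %v\n", img, err)
			continue
		}
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			fmt.Printf("podman load %s failed: %v\n", tarball, err)
		}
	}
}

func main() {
	required := []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/kube-scheduler:v1.16.0"}
	present := map[string]bool{} // what "crictl images --output json" reported
	loadFromCache(needsTransfer(required, present))
}
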
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
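
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` step above rewrites /etc/hosts so exactly one record for control-plane.minikube.internal remains. Below is a minimal, illustrative Go sketch of that idempotent update; the helper name upsertHost and the hard-coded path/IP are assumptions for the example, not minikube code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing line ending in "<TAB>name", appends
    // "ip<TAB>name", and swaps the file into place via a temporary copy.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) { // mirror grep -v $'\t<name>$'
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.50.116", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
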
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
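
The openssl/ln sequence above uses the standard OpenSSL subject-hash convention: each trusted CA is symlinked as <hash>.0 under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), so TLS libraries can find it by hash. A rough Go sketch of that single step follows, assuming openssl is on PATH; linkCA and the paths are illustrative, not minikube's API.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCA computes the OpenSSL subject hash of a CA cert and symlinks it
    // as <hash>.0 in certsDir, only if the link does not already exist.
    func linkCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        if _, err := os.Lstat(link); err == nil { // mirror: test -L <link> || ln -fs ...
            return nil
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
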
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
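
The sed/systemctl sequence above points cri-o at the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place, then restarts the service. A hedged Go sketch of the same rewrite follows; setKey is a made-up helper, not minikube's implementation, and a real run would still need the systemctl restart shown in the log.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces any existing "<key> = ..." line with key = "value",
    // the same effect as the sed -i expressions in the log.
    func setKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
        return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        conf, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        // A real flow would follow with "systemctl restart crio", as the log shows.
    }
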
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
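At 20:51:42 the old-k8s-version process (pid 46700, Kubernetes v1.16.0) decided the node "needs reconfigure" and, instead of a full kubeadm init, re-runs individual init phases against /var/tmp/minikube/kubeadm.yaml using the version-pinned binaries. The phase sequence, copied from the Run: lines above (the etcd and addon phases follow the same pattern later in this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml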
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
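Because no preload tarball exists for v1.29.0-rc.1 on CRI-O, the no-preload process checks every required image with podman, removes any stale tag via crictl, and loads the tarball that was copied into /var/lib/minikube/images from the cache under .minikube/cache/images. A condensed sketch of that cycle for one image; the image name and tarball path come from the log, the shell wrapper around them is mine:

    IMG=registry.k8s.io/etcd:3.5.10-0
    TARBALL=/var/lib/minikube/images/etcd_3.5.10-0
    # already present with the expected ID? if not, drop any stale copy and load from cache
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo crictl rmi "$IMG" 2>/dev/null || true
        sudo podman load -i "$TARBALL"
    fi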
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
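Interleaved with the image loading, pid 47365 (the default-k8s-diff-port profile) has recreated its libvirt domain and is polling for a DHCP lease on network mk-default-k8s-diff-port-463614, backing off from roughly 200ms towards multi-second retries. The same state can be inspected by hand; virsh is an assumption here (it does not appear in the log), while the network name and MAC address do:

    # show current DHCP leases handed out on the minikube-managed libvirt network
    virsh net-dhcp-leases mk-default-k8s-diff-port-463614
    # or watch only for the MAC the log is waiting on
    virsh net-dhcp-leases mk-default-k8s-diff-port-463614 | grep 52:54:00:98:7f:07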
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
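With the control-plane phase done, pid 46700 waits for a kube-apiserver process and then polls /healthz on the node IP; the responses further down walk from 403 (anonymous request rejected before RBAC bootstrap), through 500 (post-start hooks still failing), to 200. A hand-run equivalent of the probe; the curl invocation and the ?verbose parameter are mine, the URL is the one in the log:

    # -k because the apiserver certificate is not in the local trust store;
    # ?verbose returns the per-check [+]/[-] lines seen in the 500 bodies below
    curl -k https://192.168.50.116:8443/healthz?verbose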
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.787781   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.799426   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:51:53.809064   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:51:53.809101   46700 api_server.go:131] duration metric: took 8.476226007s to wait for apiserver health ...
	I1205 20:51:53.809112   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:53.809120   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:53.811188   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:51:53.496825   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.170377466s)
	I1205 20:51:53.496856   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1205 20:51:53.496877   46866 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:53.496925   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:55.657835   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.160865472s)
	I1205 20:51:55.657869   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1205 20:51:55.657898   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:55.657955   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:52.104758   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105274   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105301   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:52.105232   47900 retry.go:31] will retry after 2.172151449s: waiting for machine to come up
	I1205 20:51:54.279576   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280091   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:54.280054   47900 retry.go:31] will retry after 3.042324499s: waiting for machine to come up
	I1205 20:51:53.812841   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:51:53.835912   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:51:53.920892   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:51:53.943982   46700 system_pods.go:59] 7 kube-system pods found
	I1205 20:51:53.944026   46700 system_pods.go:61] "coredns-5644d7b6d9-kqhgk" [473e53e3-a0bd-4dcb-88c1-d61e9cc3e686] Running
	I1205 20:51:53.944034   46700 system_pods.go:61] "etcd-old-k8s-version-061206" [a2a6a459-41a3-49e3-b32e-a091317390ea] Running
	I1205 20:51:53.944041   46700 system_pods.go:61] "kube-apiserver-old-k8s-version-061206" [9cf24995-fccb-47e4-8d4a-870198b7c82f] Running
	I1205 20:51:53.944054   46700 system_pods.go:61] "kube-controller-manager-old-k8s-version-061206" [225a4a8b-2b6e-46f4-8bd9-9a375b05c23c] Pending
	I1205 20:51:53.944061   46700 system_pods.go:61] "kube-proxy-r5n6g" [5db8876d-ecff-40b3-a61d-aeaf7870166c] Running
	I1205 20:51:53.944068   46700 system_pods.go:61] "kube-scheduler-old-k8s-version-061206" [de56d925-45b3-4c36-b2c2-c90938793aa2] Running
	I1205 20:51:53.944075   46700 system_pods.go:61] "storage-provisioner" [d5d57d93-f94b-4a3e-8c65-25cd4d71b9d5] Running
	I1205 20:51:53.944083   46700 system_pods.go:74] duration metric: took 23.165628ms to wait for pod list to return data ...
	I1205 20:51:53.944093   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:51:53.956907   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:51:53.956949   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:51:53.956964   46700 node_conditions.go:105] duration metric: took 12.864098ms to run NodePressure ...
	I1205 20:51:53.956986   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:54.482145   46700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:51:54.492629   46700 retry.go:31] will retry after 326.419845ms: kubelet not initialised
	I1205 20:51:54.826701   46700 retry.go:31] will retry after 396.475289ms: kubelet not initialised
	I1205 20:51:55.228971   46700 retry.go:31] will retry after 752.153604ms: kubelet not initialised
	I1205 20:51:55.987713   46700 retry.go:31] will retry after 881.822561ms: kubelet not initialised
	I1205 20:51:56.877407   46700 retry.go:31] will retry after 824.757816ms: kubelet not initialised
	I1205 20:51:57.707927   46700 retry.go:31] will retry after 2.392241385s: kubelet not initialised
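After the bridge CNI conflist is written, pid 46700 lists the kube-system pods (only kube-controller-manager is still Pending), checks node capacity, runs the kubeadm addon phase, and then keeps retrying until the restarted kubelet has re-created its pods. The same view can be reproduced with kubectl; the commands are mine, and the context name is assumed to match the profile name visible in the pod names above:

    kubectl --context old-k8s-version-061206 -n kube-system get pods
    # capacity figures matching the 17784752Ki ephemeral storage and 2 CPUs reported above
    kubectl --context old-k8s-version-061206 get nodes -o jsonpath='{.items[0].status.capacity}'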
	I1205 20:51:58.643374   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.985387711s)
	I1205 20:51:58.643408   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1205 20:51:58.643434   46866 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:58.643500   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:59.407245   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:51:59.407282   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:59.407333   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:57.324016   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324534   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324565   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:57.324482   47900 retry.go:31] will retry after 3.449667479s: waiting for machine to come up
	I1205 20:52:00.776644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777141   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Found IP for machine: 192.168.39.27
	I1205 20:52:00.777175   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777186   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserving static IP address...
	I1205 20:52:00.777825   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserved static IP address: 192.168.39.27
	I1205 20:52:00.777878   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.777892   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for SSH to be available...
	I1205 20:52:00.777918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | skip adding static IP to network mk-default-k8s-diff-port-463614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"}
	I1205 20:52:00.777929   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Getting to WaitForSSH function...
	I1205 20:52:00.780317   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.780729   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH client type: external
	I1205 20:52:00.780909   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa (-rw-------)
	I1205 20:52:00.780940   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:00.780959   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | About to run SSH command:
	I1205 20:52:00.780980   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | exit 0
	I1205 20:52:00.922857   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:00.923204   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetConfigRaw
	I1205 20:52:00.923973   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:00.927405   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.927885   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.927918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.928217   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:52:00.928469   47365 machine.go:88] provisioning docker machine ...
	I1205 20:52:00.928497   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:00.928735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.928912   47365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-463614"
	I1205 20:52:00.928938   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.929092   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:00.931664   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932096   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.932130   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:00.932496   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932672   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932822   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:00.932990   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:00.933401   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:00.933420   47365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-463614 && echo "default-k8s-diff-port-463614" | sudo tee /etc/hostname
	I1205 20:52:01.078295   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-463614
	
	I1205 20:52:01.078332   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.081604   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082051   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.082079   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082240   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.082492   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.083034   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.083506   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.083535   47365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-463614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-463614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-463614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:01.215856   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:01.215884   47365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:01.215912   47365 buildroot.go:174] setting up certificates
	I1205 20:52:01.215927   47365 provision.go:83] configureAuth start
	I1205 20:52:01.215947   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:01.216246   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:01.219169   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219465   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.219503   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.221768   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222137   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.222171   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222410   47365 provision.go:138] copyHostCerts
	I1205 20:52:01.222493   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:01.222508   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:01.222568   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:01.222686   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:01.222717   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:01.222757   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:01.222825   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:01.222832   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:01.222856   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:01.222921   47365 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-463614 san=[192.168.39.27 192.168.39.27 localhost 127.0.0.1 minikube default-k8s-diff-port-463614]
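configureAuth then generates a per-machine server certificate signed by the shared minikube CA, with SANs covering the VM IP, localhost, and both hostnames (the san=[...] list above); the resulting server.pem and key are scp'd to /etc/docker on the node a little further down. One way to confirm the SANs on the generated certificate; the openssl invocation is mine, the path is the ServerCertPath from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'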
	I1205 20:52:02.247282   46374 start.go:369] acquired machines lock for "embed-certs-331495" in 54.15977635s
	I1205 20:52:02.247348   46374 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:52:02.247360   46374 fix.go:54] fixHost starting: 
	I1205 20:52:02.247794   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:02.247830   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:02.265529   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I1205 20:52:02.265970   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:02.266457   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:52:02.266484   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:02.266825   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:02.267016   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:02.267185   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:52:02.268838   46374 fix.go:102] recreateIfNeeded on embed-certs-331495: state=Stopped err=<nil>
	I1205 20:52:02.268859   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	W1205 20:52:02.269010   46374 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:52:02.270658   46374 out.go:177] * Restarting existing kvm2 VM for "embed-certs-331495" ...
	I1205 20:52:00.114757   46700 retry.go:31] will retry after 2.136164682s: kubelet not initialised
	I1205 20:52:02.258242   46700 retry.go:31] will retry after 4.673214987s: kubelet not initialised
	I1205 20:52:01.474739   47365 provision.go:172] copyRemoteCerts
	I1205 20:52:01.474804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:01.474834   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.477249   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477632   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.477659   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477908   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.478119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.478313   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.478463   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:01.569617   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:01.594120   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 20:52:01.618066   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:52:01.643143   47365 provision.go:86] duration metric: configureAuth took 427.201784ms
	I1205 20:52:01.643169   47365 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:01.643353   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:01.643435   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.646320   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.646821   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.646881   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.647001   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.647206   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647407   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647555   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.647721   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.648105   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.648135   47365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:01.996428   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:01.996456   47365 machine.go:91] provisioned docker machine in 1.067968652s
	I1205 20:52:01.996468   47365 start.go:300] post-start starting for "default-k8s-diff-port-463614" (driver="kvm2")
	I1205 20:52:01.996482   47365 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:01.996502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:01.996804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:01.996829   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.999880   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000345   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.000378   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.000733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.000872   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.001041   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.088194   47365 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:02.092422   47365 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:02.092447   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:02.092522   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:02.092607   47365 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:02.092692   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:02.100847   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:02.125282   47365 start.go:303] post-start completed in 128.798422ms
	I1205 20:52:02.125308   47365 fix.go:56] fixHost completed within 20.234129302s
	I1205 20:52:02.125334   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.128159   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128506   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.128539   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.128970   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129157   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129330   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.129505   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:02.129980   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:02.130001   47365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:52:02.247134   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809522.185244520
	
	I1205 20:52:02.247160   47365 fix.go:206] guest clock: 1701809522.185244520
	I1205 20:52:02.247170   47365 fix.go:219] Guest: 2023-12-05 20:52:02.18524452 +0000 UTC Remote: 2023-12-05 20:52:02.125313647 +0000 UTC m=+165.907305797 (delta=59.930873ms)
	I1205 20:52:02.247193   47365 fix.go:190] guest clock delta is within tolerance: 59.930873ms
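
The fix.go lines above compare the guest VM clock against the host clock and skip resynchronization when the difference is small (here roughly 59.9ms). Below is a minimal standalone Go sketch of that comparison; the 2-second tolerance and the helper names are illustrative assumptions, not minikube's actual values or API.

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is an assumed tolerance; the log above accepts a ~60ms delta,
// so anything on the order of seconds would pass such a check.
const maxClockDelta = 2 * time.Second

// withinTolerance reports the absolute guest/host clock difference and
// whether it is small enough that no time sync is needed.
func withinTolerance(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxClockDelta
}

func main() {
	host := time.Now()
	guest := host.Add(59930873 * time.Nanosecond) // the ~59.93ms delta reported in the log
	delta, ok := withinTolerance(guest, host)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
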
	I1205 20:52:02.247199   47365 start.go:83] releasing machines lock for "default-k8s-diff-port-463614", held for 20.356057608s
	I1205 20:52:02.247233   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.247561   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:02.250476   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.250918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.250952   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.251123   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.251833   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252026   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252117   47365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:02.252168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.252434   47365 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:02.252461   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.255221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255382   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.255750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.255949   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.256004   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.256060   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256278   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.256288   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256453   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256447   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.256586   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256698   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.343546   47365 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:02.368171   47365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:02.518472   47365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:02.524733   47365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:02.524808   47365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:02.541607   47365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:02.541632   47365 start.go:475] detecting cgroup driver to use...
	I1205 20:52:02.541703   47365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:02.560122   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:02.575179   47365 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:02.575244   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:02.591489   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:02.606022   47365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:02.711424   47365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:02.828436   47365 docker.go:219] disabling docker service ...
	I1205 20:52:02.828515   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:02.844209   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:02.860693   47365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:02.979799   47365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:03.111682   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:03.128706   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:03.147984   47365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:03.148057   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.160998   47365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:03.161068   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.173347   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.185126   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.195772   47365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:03.206308   47365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:03.215053   47365 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:03.215103   47365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:03.227755   47365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:03.237219   47365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:03.369712   47365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:03.561508   47365 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:03.561575   47365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:03.569369   47365 start.go:543] Will wait 60s for crictl version
	I1205 20:52:03.569437   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:52:03.575388   47365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:03.618355   47365 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:03.618458   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.670174   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.716011   47365 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
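
Before declaring the runtime ready, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for `crictl version` to answer. The following self-contained Go sketch covers the socket-polling half of that wait; the 500ms poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step in the log.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket (or any file at that path) is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}
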
	I1205 20:52:02.272006   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Start
	I1205 20:52:02.272171   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring networks are active...
	I1205 20:52:02.272890   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network default is active
	I1205 20:52:02.273264   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network mk-embed-certs-331495 is active
	I1205 20:52:02.273634   46374 main.go:141] libmachine: (embed-certs-331495) Getting domain xml...
	I1205 20:52:02.274223   46374 main.go:141] libmachine: (embed-certs-331495) Creating domain...
	I1205 20:52:03.644135   46374 main.go:141] libmachine: (embed-certs-331495) Waiting to get IP...
	I1205 20:52:03.645065   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.645451   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.645561   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.645439   48036 retry.go:31] will retry after 246.973389ms: waiting for machine to come up
	I1205 20:52:03.894137   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.894708   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.894813   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.894768   48036 retry.go:31] will retry after 353.753964ms: waiting for machine to come up
	I1205 20:52:04.250496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.251201   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.251231   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.251151   48036 retry.go:31] will retry after 370.705045ms: waiting for machine to come up
	I1205 20:52:04.623959   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.624532   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.624563   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.624488   48036 retry.go:31] will retry after 409.148704ms: waiting for machine to come up
	I1205 20:52:05.035991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.036492   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.036521   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.036458   48036 retry.go:31] will retry after 585.089935ms: waiting for machine to come up
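
The retry.go lines above wait for the restarted VM to obtain an IP, sleeping for a progressively longer, slightly randomized interval between attempts (247ms, 354ms, 371ms, ...). Here is a rough Go sketch of that retry-with-growing-backoff pattern; the base delay, growth factor, and attempt cap are illustrative assumptions, not minikube's actual parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// waiting a randomized, growing delay between calls.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10, 250*time.Millisecond)
	fmt.Println("result:", err, "after", calls, "calls")
}
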
	I1205 20:52:01.272757   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.865397348s)
	I1205 20:52:01.272791   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1205 20:52:01.272823   46866 cache_images.go:123] Successfully loaded all cached images
	I1205 20:52:01.272830   46866 cache_images.go:92] LoadImages completed in 17.860858219s
	I1205 20:52:01.272913   46866 ssh_runner.go:195] Run: crio config
	I1205 20:52:01.346651   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:01.346671   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:01.346689   46866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:01.346715   46866 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.162 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143651 NodeName:no-preload-143651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:01.346890   46866 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143651"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:01.347005   46866 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:01.347080   46866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1205 20:52:01.360759   46866 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:01.360818   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:01.372537   46866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 20:52:01.389057   46866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1205 20:52:01.405689   46866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1205 20:52:01.426066   46866 ssh_runner.go:195] Run: grep 192.168.61.162	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:01.430363   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
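
The /etc/hosts command above is an idempotent upsert: it drops any existing line ending in the hostname, then appends a fresh "ip<TAB>host" entry. A minimal Go sketch of the same transformation follows (writing the result back with sudo cp, as the log does, is left out); the function name is hypothetical.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any existing line for host and appends "ip\thost",
// mirroring the grep -v / echo pipeline shown in the log.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.61.1\tcontrol-plane.minikube.internal"
	fmt.Print(upsertHostsEntry(in, "192.168.61.162", "control-plane.minikube.internal"))
}
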
	I1205 20:52:01.443015   46866 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651 for IP: 192.168.61.162
	I1205 20:52:01.443049   46866 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:01.443202   46866 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:01.443254   46866 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:01.443337   46866 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.key
	I1205 20:52:01.443423   46866 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key.5bf94fca
	I1205 20:52:01.443477   46866 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key
	I1205 20:52:01.443626   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:01.443664   46866 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:01.443689   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:01.443729   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:01.443768   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:01.443800   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:01.443868   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:01.444505   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:01.471368   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:01.495925   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:01.520040   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:01.542515   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:01.565061   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:01.592011   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:01.615244   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:01.640425   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:01.666161   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:01.688991   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:01.711978   46866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:01.728642   46866 ssh_runner.go:195] Run: openssl version
	I1205 20:52:01.734248   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:01.746741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751589   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751647   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.757299   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:01.768280   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:01.779234   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783897   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783961   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.789668   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:01.800797   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:01.814741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819713   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819774   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.825538   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:01.836443   46866 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:01.842191   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:01.850025   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:01.857120   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:01.863507   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:01.870887   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:01.878657   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
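
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The standalone Go sketch below answers the same question with crypto/x509; the certificate path is taken from the log, everything else is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within the
// given window, the same check `openssl x509 -checkend 86400` performs above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
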
	I1205 20:52:01.886121   46866 kubeadm.go:404] StartCluster: {Name:no-preload-143651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:01.886245   46866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:01.886311   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:01.933026   46866 cri.go:89] found id: ""
	I1205 20:52:01.933096   46866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:01.946862   46866 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:01.946891   46866 kubeadm.go:636] restartCluster start
	I1205 20:52:01.946950   46866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:01.959468   46866 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.960467   46866 kubeconfig.go:92] found "no-preload-143651" server: "https://192.168.61.162:8443"
	I1205 20:52:01.962804   46866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:01.975351   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.975427   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:01.988408   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.988439   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.988493   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.001669   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:02.502716   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:02.502781   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.515220   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.002777   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.002843   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.016667   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.501748   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.501840   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.515761   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.001797   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.001873   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.018140   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.502697   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.502791   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.518059   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.002414   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.002515   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.021107   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.502637   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.502733   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.521380   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
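
The repeated api_server.go entries show the restart path polling roughly every half second for a running kube-apiserver process via pgrep until one appears. A condensed Go sketch of that polling loop follows; the overall deadline and the dropped sudo are assumptions made for the example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID runs pgrep the same way the log does and returns the newest
// matching PID, or an error if no process matches.
func apiserverPID() (string, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

// waitForAPIServer polls apiserverPID every interval until it succeeds or the
// deadline passes; the ~500ms spacing matches the timestamps in the log.
func waitForAPIServer(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not come up within %v", timeout)
}

func main() {
	if err := waitForAPIServer(4*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}
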
	I1205 20:52:03.717595   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:03.720774   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721210   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:03.721242   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721414   47365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:03.726330   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:03.738414   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:03.738479   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:03.777318   47365 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:03.777380   47365 ssh_runner.go:195] Run: which lz4
	I1205 20:52:03.781463   47365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:52:03.785728   47365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:03.785759   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:05.712791   47365 crio.go:444] Took 1.931355 seconds to copy over tarball
	I1205 20:52:05.712888   47365 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:06.939842   46700 retry.go:31] will retry after 8.345823287s: kubelet not initialised
	I1205 20:52:05.623348   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.623894   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.623928   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.623844   48036 retry.go:31] will retry after 819.796622ms: waiting for machine to come up
	I1205 20:52:06.445034   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:06.445471   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:06.445504   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:06.445427   48036 retry.go:31] will retry after 716.017152ms: waiting for machine to come up
	I1205 20:52:07.162965   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:07.163496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:07.163526   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:07.163445   48036 retry.go:31] will retry after 1.085415508s: waiting for machine to come up
	I1205 20:52:08.250373   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:08.250962   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:08.250999   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:08.250909   48036 retry.go:31] will retry after 1.128069986s: waiting for machine to come up
	I1205 20:52:09.380537   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:09.381001   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:09.381027   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:09.380964   48036 retry.go:31] will retry after 1.475239998s: waiting for machine to come up
	I1205 20:52:06.002168   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.002247   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.025123   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:06.502715   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.502831   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.519395   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.001937   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.002068   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.019028   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.501962   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.502059   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.515098   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.002769   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.002909   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.020137   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.501807   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.501949   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.518082   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.002421   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.002505   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.016089   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.502171   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.502261   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.515449   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.001975   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.002117   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.013831   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.502398   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.502481   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.514939   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.946250   47365 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.233316669s)
	I1205 20:52:08.946291   47365 crio.go:451] Took 3.233468 seconds to extract the tarball
	I1205 20:52:08.946304   47365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:08.988526   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:09.041782   47365 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:09.041812   47365 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:09.041908   47365 ssh_runner.go:195] Run: crio config
	I1205 20:52:09.105852   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:09.105879   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:09.105901   47365 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:09.105926   47365 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-463614 NodeName:default-k8s-diff-port-463614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:09.106114   47365 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-463614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:09.106218   47365 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-463614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1205 20:52:09.106295   47365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:09.116476   47365 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:09.116569   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:09.125304   47365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1205 20:52:09.141963   47365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:09.158882   47365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1205 20:52:09.177829   47365 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:09.181803   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:09.194791   47365 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614 for IP: 192.168.39.27
	I1205 20:52:09.194824   47365 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:09.194968   47365 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:09.195028   47365 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:09.195135   47365 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.key
	I1205 20:52:09.195225   47365 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key.310d49ea
	I1205 20:52:09.195287   47365 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key
	I1205 20:52:09.195457   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:09.195502   47365 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:09.195519   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:09.195561   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:09.195594   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:09.195625   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:09.195698   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:09.196495   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:09.221945   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:09.249557   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:09.279843   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:09.309602   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:09.338163   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:09.365034   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:09.394774   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:09.420786   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:09.445787   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:09.474838   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:09.499751   47365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:09.523805   47365 ssh_runner.go:195] Run: openssl version
	I1205 20:52:09.530143   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:09.545184   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550681   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550751   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.558670   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:09.573789   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:09.585134   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591055   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591136   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.597286   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:09.608901   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:09.620949   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626190   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626267   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.632394   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:09.645362   47365 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:09.650768   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:09.657084   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:09.663183   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:09.669093   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:09.675365   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:09.681992   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
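The certificate handling above follows two standard OpenSSL conventions: each CA is exposed in /etc/ssl/certs under a symlink named after its subject hash, and -checkend 86400 asks whether a certificate expires within the next 86400 seconds (24 hours), exiting non-zero if it does. A minimal sketch of the same two checks, using paths taken from the log:

	# sketch: hash-named CA symlink, as in the ln -fs commands above
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# sketch: fail if the apiserver cert would expire within 24 hours
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  || echo "apiserver certificate expires within 24h"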
	I1205 20:52:09.688849   47365 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:09.688963   47365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:09.689035   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:09.730999   47365 cri.go:89] found id: ""
	I1205 20:52:09.731061   47365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:09.741609   47365 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:09.741640   47365 kubeadm.go:636] restartCluster start
	I1205 20:52:09.741700   47365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:09.751658   47365 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.752671   47365 kubeconfig.go:92] found "default-k8s-diff-port-463614" server: "https://192.168.39.27:8444"
	I1205 20:52:09.755361   47365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:09.765922   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.766006   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.781956   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.781983   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.782033   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.795265   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.295986   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.296088   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.312309   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.795832   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.795959   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.808880   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.857552   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:10.857968   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:10.858002   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:10.857911   48036 retry.go:31] will retry after 1.882319488s: waiting for machine to come up
	I1205 20:52:12.741608   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:12.742051   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:12.742081   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:12.742006   48036 retry.go:31] will retry after 2.598691975s: waiting for machine to come up
	I1205 20:52:15.343818   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:15.344360   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:15.344385   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:15.344306   48036 retry.go:31] will retry after 3.313897625s: waiting for machine to come up
	I1205 20:52:11.002661   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.002740   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.014931   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.502548   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.502621   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.516090   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.975668   46866 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:11.975724   46866 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:11.975739   46866 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:11.975820   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:12.032265   46866 cri.go:89] found id: ""
	I1205 20:52:12.032364   46866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:12.050705   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:12.060629   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:12.060726   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.073988   46866 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.074015   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:12.209842   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.318235   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108353469s)
	I1205 20:52:13.318280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.518224   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.606064   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.695764   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:13.695849   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:13.718394   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.237554   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.737066   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:15.236911   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:11.295662   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.295754   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.308889   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.796322   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.796432   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.812351   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.295433   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.295527   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.308482   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.795889   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.795961   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.812458   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.296017   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.296114   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.312758   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.796111   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.796256   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.812247   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.295726   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.295808   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.308712   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.796358   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.796439   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.813173   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.295541   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.295632   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.312665   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.796231   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.796378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.816767   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
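Each "Checking apiserver status" iteration above is a roughly 500 ms poll of sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits with status 1 when nothing matches, which is what produces the repeated "stopped: unable to get apiserver pid" warnings until the kube-apiserver static pod actually starts. A sketch of one iteration (flags: -x exact match, -n newest matching process, -f match against the full command line):

	# sketch: one poll iteration of the apiserver-process check
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*'; then
	  echo "apiserver process is up"
	else
	  echo "no apiserver process yet (pgrep exit status 1)"
	fi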
	I1205 20:52:15.292395   46700 retry.go:31] will retry after 12.309806949s: kubelet not initialised
	I1205 20:52:18.659431   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:18.659915   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:18.659944   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:18.659867   48036 retry.go:31] will retry after 3.672641091s: waiting for machine to come up
	I1205 20:52:15.737064   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.237656   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.263010   46866 api_server.go:72] duration metric: took 2.567245952s to wait for apiserver process to appear ...
	I1205 20:52:16.263039   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:16.263057   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.286115   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.286153   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.286173   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.334683   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.334710   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.835110   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.840833   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:19.840866   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.335444   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.355923   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:20.355956   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.835568   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.840974   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:52:20.849239   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:52:20.849274   46866 api_server.go:131] duration metric: took 4.586226618s to wait for apiserver health ...
	I1205 20:52:20.849284   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:20.849323   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:20.850829   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
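The health wait above goes through the usual progression for a restarting control plane: unauthenticated requests to /healthz are rejected with 403 until the RBAC bootstrap roles exist, the endpoint then returns 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok". A sketch of probing the same endpoint by hand, using the address from the log; -k skips server-certificate verification and the request carries no credentials, so a 403 for system:anonymous like the one logged is the expected first answer, while a client certificate accepted by the apiserver yields the detailed check output while hooks are failing and plain "ok" once healthy:

	# sketch: anonymous probe of the apiserver health endpoint seen in this log
	curl -ks https://192.168.61.162:8443/healthz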
	I1205 20:52:16.295650   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.295729   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.312742   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:16.796283   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.796364   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.812822   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.295879   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.295953   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.312254   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.795437   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.795519   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.808598   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.296187   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.296266   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.312808   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.796368   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.796480   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.812986   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.295511   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:19.295576   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:19.308830   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.766569   47365 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:19.766653   47365 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:19.766673   47365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:19.766748   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:19.820510   47365 cri.go:89] found id: ""
	I1205 20:52:19.820590   47365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:19.842229   47365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:19.853234   47365 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:19.853293   47365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866181   47365 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866220   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:20.022098   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.165439   47365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143295704s)
	I1205 20:52:21.165472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:22.333575   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334146   46374 main.go:141] libmachine: (embed-certs-331495) Found IP for machine: 192.168.72.180
	I1205 20:52:22.334189   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has current primary IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334205   46374 main.go:141] libmachine: (embed-certs-331495) Reserving static IP address...
	I1205 20:52:22.334654   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.334686   46374 main.go:141] libmachine: (embed-certs-331495) DBG | skip adding static IP to network mk-embed-certs-331495 - found existing host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"}
	I1205 20:52:22.334699   46374 main.go:141] libmachine: (embed-certs-331495) Reserved static IP address: 192.168.72.180
	I1205 20:52:22.334717   46374 main.go:141] libmachine: (embed-certs-331495) Waiting for SSH to be available...
	I1205 20:52:22.334727   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Getting to WaitForSSH function...
	I1205 20:52:22.337411   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337832   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.337863   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337976   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH client type: external
	I1205 20:52:22.338005   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa (-rw-------)
	I1205 20:52:22.338038   46374 main.go:141] libmachine: (embed-certs-331495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:22.338057   46374 main.go:141] libmachine: (embed-certs-331495) DBG | About to run SSH command:
	I1205 20:52:22.338071   46374 main.go:141] libmachine: (embed-certs-331495) DBG | exit 0
	I1205 20:52:22.430984   46374 main.go:141] libmachine: (embed-certs-331495) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:22.431374   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetConfigRaw
	I1205 20:52:22.432120   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.435317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.435737   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.435772   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.436044   46374 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:52:22.436283   46374 machine.go:88] provisioning docker machine ...
	I1205 20:52:22.436304   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:22.436519   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436687   46374 buildroot.go:166] provisioning hostname "embed-certs-331495"
	I1205 20:52:22.436707   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436882   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.439595   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.439966   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.439998   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.440179   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.440392   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440558   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440718   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.440891   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.441216   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.441235   46374 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-331495 && echo "embed-certs-331495" | sudo tee /etc/hostname
	I1205 20:52:22.584600   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-331495
	
	I1205 20:52:22.584662   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.587640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588053   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.588083   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588255   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.588469   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.588985   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.589340   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.589369   46374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-331495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-331495/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-331495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:22.722352   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:22.722390   46374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:22.722437   46374 buildroot.go:174] setting up certificates
	I1205 20:52:22.722459   46374 provision.go:83] configureAuth start
	I1205 20:52:22.722475   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.722776   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.725826   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726254   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.726313   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726616   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.729267   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729606   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.729640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729798   46374 provision.go:138] copyHostCerts
	I1205 20:52:22.729843   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:22.729853   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:22.729907   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:22.729986   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:22.729994   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:22.730019   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:22.730090   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:22.730100   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:22.730128   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:22.730188   46374 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.embed-certs-331495 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-331495]
	I1205 20:52:22.795361   46374 provision.go:172] copyRemoteCerts
	I1205 20:52:22.795435   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:22.795464   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.798629   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799006   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.799052   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799222   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.799448   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.799617   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.799774   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:22.892255   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:52:22.929940   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:52:22.966087   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:22.998887   46374 provision.go:86] duration metric: configureAuth took 276.409362ms
	I1205 20:52:22.998937   46374 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:22.999160   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:22.999253   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.002604   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.002992   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.003033   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.003265   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.003516   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003723   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003916   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.004090   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.004540   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.004568   46374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:23.371418   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:23.371450   46374 machine.go:91] provisioned docker machine in 935.149228ms
	I1205 20:52:23.371464   46374 start.go:300] post-start starting for "embed-certs-331495" (driver="kvm2")
	I1205 20:52:23.371477   46374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:23.371500   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.371872   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:23.371911   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.375440   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.375960   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.375991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.376130   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.376328   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.376512   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.376693   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.472304   46374 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:23.477044   46374 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:23.477070   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:23.477177   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:23.477287   46374 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:23.477425   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:23.493987   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:23.519048   46374 start.go:303] post-start completed in 147.566985ms
	I1205 20:52:23.519082   46374 fix.go:56] fixHost completed within 21.27172194s
	I1205 20:52:23.519107   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.522260   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522700   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.522735   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522967   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.523238   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523456   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.523893   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.524220   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.524239   46374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:23.648717   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809543.591713401
	
	I1205 20:52:23.648743   46374 fix.go:206] guest clock: 1701809543.591713401
	I1205 20:52:23.648755   46374 fix.go:219] Guest: 2023-12-05 20:52:23.591713401 +0000 UTC Remote: 2023-12-05 20:52:23.519087629 +0000 UTC m=+358.020977056 (delta=72.625772ms)
	I1205 20:52:23.648800   46374 fix.go:190] guest clock delta is within tolerance: 72.625772ms
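
The date +%s.%N exchange above is a guest-clock skew check: the guest's reported time is compared with the host-side timestamp and the run proceeds when the delta is within tolerance. A minimal sketch of that comparison, reusing the two timestamps from the lines above (the one-second tolerance is an assumption for illustration, not necessarily minikube's threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses the output of `date +%s.%N` from the guest and
    // returns the absolute difference from the supplied host-side timestamp.
    func guestClockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Both values come from the log lines above ("guest clock" and "Remote").
        host := time.Date(2023, time.December, 5, 20, 52, 23, 519087629, time.UTC)
        delta, err := guestClockDelta("1701809543.591713401", host)
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // illustrative threshold only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
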
	I1205 20:52:23.648808   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 21.401495157s
	I1205 20:52:23.648838   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.649149   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:23.652098   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652534   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.652577   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652773   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653350   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653552   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653655   46374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:23.653709   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.653948   46374 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:23.653989   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.657266   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657547   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657637   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657669   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657946   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657957   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.657970   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.658236   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.658250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658438   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658532   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658756   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.658785   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658933   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.777965   46374 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:23.784199   46374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:23.948621   46374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:23.957081   46374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:23.957163   46374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:23.978991   46374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:23.979023   46374 start.go:475] detecting cgroup driver to use...
	I1205 20:52:23.979124   46374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:23.997195   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:24.015420   46374 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:24.015494   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:24.031407   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:24.047587   46374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:24.200996   46374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:24.332015   46374 docker.go:219] disabling docker service ...
	I1205 20:52:24.332095   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:24.350586   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:24.367457   46374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:24.545467   46374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:24.733692   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:24.748391   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:24.768555   46374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:24.768644   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.780668   46374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:24.780740   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.792671   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.806500   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
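
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to cgroupfs, and conmon_cgroup is reset to "pod". A rough Go equivalent of that line rewriting, operating on an in-memory copy (the sample input is illustrative, not the VM's actual file, and this is not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative excerpt of a crio drop-in before the rewrite.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `

        // Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

        // Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        // Mirror of: sed -i '/conmon_cgroup = .*/d' followed by
        //            sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

        fmt.Print(conf)
    }
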
	I1205 20:52:24.818442   46374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:24.829822   46374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:24.842070   46374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:24.842138   46374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:24.857370   46374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:24.867993   46374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:25.024629   46374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:25.231556   46374 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:25.231630   46374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:25.237863   46374 start.go:543] Will wait 60s for crictl version
	I1205 20:52:25.237929   46374 ssh_runner.go:195] Run: which crictl
	I1205 20:52:25.242501   46374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:25.289507   46374 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
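
Between the crio restart and the version checks above, the start-up code waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for the runtime version. A small stand-in for that wait (the polling interval and the plain stat-based check are assumptions, not the exact start.go logic):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        // Same check the log performs next: query the CRI runtime version.
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
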
	I1205 20:52:25.289591   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.340432   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.398354   46374 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:25.399701   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:25.402614   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.402997   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:25.403029   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.403259   46374 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:25.407873   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:25.420725   46374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:25.420801   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:25.468651   46374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:25.468726   46374 ssh_runner.go:195] Run: which lz4
	I1205 20:52:25.473976   46374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:25.478835   46374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:25.478871   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:20.852220   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:20.867614   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:20.892008   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:20.912985   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:20.913027   46866 system_pods.go:61] "coredns-76f75df574-8d24t" [10265d3b-ddf0-4559-8194-d42563df88a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:20.913038   46866 system_pods.go:61] "etcd-no-preload-143651" [a6b62f23-a944-41ec-b465-6027fcf1f413] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:20.913051   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [5a6b5874-6c6b-4ed6-aa68-8e7fc35a486e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:20.913061   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [42b01d8c-2d8f-467e-8183-eef2e6f73b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:20.913074   46866 system_pods.go:61] "kube-proxy-mltvl" [9adea5d0-e824-40ff-b5b4-16f84fd439ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:20.913085   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [17474fca-8390-48db-bebe-47c1e2cf7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:20.913107   46866 system_pods.go:61] "metrics-server-57f55c9bc5-mhxpn" [3eb25a58-bea3-4266-9bf8-8f186ee65e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:20.913120   46866 system_pods.go:61] "storage-provisioner" [cfe9d24c-a534-4778-980b-99f7addcf0b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:20.913132   46866 system_pods.go:74] duration metric: took 21.101691ms to wait for pod list to return data ...
	I1205 20:52:20.913143   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:20.917108   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:20.917140   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:20.917156   46866 node_conditions.go:105] duration metric: took 4.003994ms to run NodePressure ...
	I1205 20:52:20.917180   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.315507   46866 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321271   46866 kubeadm.go:787] kubelet initialised
	I1205 20:52:21.321301   46866 kubeadm.go:788] duration metric: took 5.763416ms waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321310   46866 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:21.327760   46866 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:23.354192   46866 pod_ready.go:102] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:25.353274   46866 pod_ready.go:92] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:25.353356   46866 pod_ready.go:81] duration metric: took 4.02555842s waiting for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:25.353372   46866 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
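
The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, logging "Ready":"False" on every retry. A comparable wait can be written with client-go's polling helper; this is a generic sketch under assumed settings (poll interval, error handling), not the pod_ready.go implementation, although the kubeconfig path and pod name are taken from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17731-6237/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-76f75df574-8d24t", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println(`pod has status "Ready":"True"`)
    }
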
	I1205 20:52:21.402472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.498902   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.585971   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:21.586073   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:21.605993   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.120378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.620326   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.119466   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.619549   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.120228   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.143130   47365 api_server.go:72] duration metric: took 2.557157382s to wait for apiserver process to appear ...
	I1205 20:52:24.143163   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:24.143182   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:27.608165   46700 retry.go:31] will retry after 7.717398196s: kubelet not initialised
	I1205 20:52:28.335417   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:28.335446   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:28.335457   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.429478   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.429507   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:28.929996   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.936475   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.936525   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.430308   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.437787   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:29.437838   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.930326   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.942625   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:52:29.953842   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:29.953875   47365 api_server.go:131] duration metric: took 5.810704359s to wait for apiserver health ...
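
The healthz exchange above is the usual kube-apiserver startup sequence: anonymous requests are first rejected with 403, then /healthz returns 500 with per-poststarthook status until all hooks finish, and finally 200 "ok". A bare-bones poller with the same shape follows; the URL is the one from the log, certificate verification is skipped only to keep the sketch self-contained (a real client would trust the cluster CA instead), and the interval and deadline are arbitrary:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skip verification for this illustration only; the apiserver cert is
                // signed by the cluster's own CA, not a public one.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        url := "https://192.168.39.27:8444/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200:\n%s\n", url, body)
                    return
                }
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy in time")
    }
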
	I1205 20:52:29.953889   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:29.953904   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:29.955505   47365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:27.326223   46374 crio.go:444] Took 1.852284 seconds to copy over tarball
	I1205 20:52:27.326333   46374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
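
Because the earlier crictl images check found no preloaded images, the preload tarball was copied to the VM and is unpacked into /var with tar's lz4 filter, as the Run line above shows. Driving that same command from Go is straightforward (sketch only; the tarball path matches the log):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Mirrors: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extracting preload: %v\n%s", err, out)
        }
        log.Printf("extracted preloaded images in %s", time.Since(start))
    }
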
	I1205 20:52:27.374784   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:29.378733   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:30.375181   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:30.375266   46866 pod_ready.go:81] duration metric: took 5.021883955s waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.375316   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:29.956914   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:29.981391   47365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:30.016634   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:30.030957   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:30.031030   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:30.031047   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:30.031069   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:30.031088   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:30.031117   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:30.031135   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:30.031148   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:30.031165   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:30.031177   47365 system_pods.go:74] duration metric: took 14.513879ms to wait for pod list to return data ...
	I1205 20:52:30.031190   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:30.035458   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:30.035493   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:30.035506   47365 node_conditions.go:105] duration metric: took 4.295594ms to run NodePressure ...
	I1205 20:52:30.035525   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:30.302125   47365 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307852   47365 kubeadm.go:787] kubelet initialised
	I1205 20:52:30.307875   47365 kubeadm.go:788] duration metric: took 5.724991ms waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307883   47365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:30.316621   47365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.323682   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323716   47365 pod_ready.go:81] duration metric: took 7.060042ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.323728   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323736   47365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.338909   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338945   47365 pod_ready.go:81] duration metric: took 15.198541ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.338967   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338977   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.349461   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349491   47365 pod_ready.go:81] duration metric: took 10.504515ms waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.349505   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349513   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.422520   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422553   47365 pod_ready.go:81] duration metric: took 73.030993ms waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.422569   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422588   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.212527   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212553   47365 pod_ready.go:81] duration metric: took 789.956497ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.212564   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212575   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.727110   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727140   47365 pod_ready.go:81] duration metric: took 514.553589ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.727154   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727162   47365 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.168658   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168695   47365 pod_ready.go:81] duration metric: took 441.52358ms waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:32.168711   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168720   47365 pod_ready.go:38] duration metric: took 1.860826751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:32.168747   47365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:52:32.182053   47365 ops.go:34] apiserver oom_adj: -16
	I1205 20:52:32.182075   47365 kubeadm.go:640] restartCluster took 22.440428452s
	I1205 20:52:32.182083   47365 kubeadm.go:406] StartCluster complete in 22.493245354s
	I1205 20:52:32.182130   47365 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.182208   47365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:52:32.184035   47365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.290773   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:52:32.290931   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:32.290921   47365 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:52:32.291055   47365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291079   47365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291088   47365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291099   47365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-463614"
	I1205 20:52:32.291123   47365 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291133   47365 addons.go:240] addon metrics-server should already be in state true
	I1205 20:52:32.291177   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291093   47365 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291220   47365 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:52:32.291298   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291586   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291607   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291633   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291635   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291713   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291739   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.311298   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I1205 20:52:32.311514   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I1205 20:52:32.311541   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1205 20:52:32.311733   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.311932   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312026   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312291   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312325   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312434   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312456   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312487   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312501   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312688   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312763   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312833   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.313276   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313300   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.313359   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313390   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.316473   47365 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.316493   47365 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:52:32.316520   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.317093   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.317125   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.328598   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1205 20:52:32.329097   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.329225   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I1205 20:52:32.329589   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.329608   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.329674   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.330230   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.330248   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.330298   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330484   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330553   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330719   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330908   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I1205 20:52:32.331201   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.331935   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.331953   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.332351   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.332472   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.332653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.512055   47365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:52:32.333098   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.511993   47365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:52:32.536814   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:52:32.512201   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.536942   47365 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.536958   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:52:32.536985   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.536843   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:52:32.537043   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.541412   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541780   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541924   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.541958   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542190   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542369   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.542394   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542434   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.542641   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.542748   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542905   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.542939   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.543088   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.543246   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.554014   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1205 20:52:32.554513   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.554975   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.555007   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.555387   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.555634   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.557606   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.557895   47365 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.557911   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:52:32.557936   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.561075   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.561553   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.561942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.562135   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.562338   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.673513   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.682442   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:52:32.682472   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:52:32.706007   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.726379   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:52:32.726413   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:52:32.779247   47365 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:52:32.780175   47365 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-463614" context rescaled to 1 replicas
	I1205 20:52:32.780220   47365 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:52:32.787518   47365 out.go:177] * Verifying Kubernetes components...
	I1205 20:52:32.790046   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:32.796219   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:32.796248   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:52:32.854438   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:34.594203   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920648219s)
	I1205 20:52:34.594267   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594294   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888240954s)
	I1205 20:52:34.594331   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594343   47365 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.80425984s)
	I1205 20:52:34.594373   47365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:34.594350   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594710   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594729   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.594755   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594772   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594783   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594801   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594754   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594860   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.595134   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595195   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595229   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595238   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.595356   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595375   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.610358   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.610390   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.610651   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.610677   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689242   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.834763203s)
	I1205 20:52:34.689294   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689309   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.689648   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.689698   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.689717   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689740   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.690020   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.690025   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.690035   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.690046   47365 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-463614"
	I1205 20:52:34.692072   47365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
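Each addon above is staged by copying its manifest to /etc/kubernetes/addons/ over SSH and then applying it with the cluster's bundled kubectl against the in-VM kubeconfig, as the `sudo KUBECONFIG=... kubectl apply -f ...` commands in the log show. A minimal sketch of that apply step in Go follows; the helper name and the fact that it runs locally rather than as root over SSH are simplifications, not minikube's actual code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyManifests shells out to the cluster's kubectl binary with KUBECONFIG
    // pointing at the VM's kubeconfig and one -f flag per addon manifest.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        // Paths taken from the log above, used here only as example arguments.
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.28.4/kubectl",
            "/var/lib/minikube/kubeconfig",
            []string{"/etc/kubernetes/addons/metrics-apiservice.yaml"},
        )
        fmt.Println(err)
    }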
	I1205 20:52:30.639619   46374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.313251826s)
	I1205 20:52:30.641314   46374 crio.go:451] Took 3.315054 seconds to extract the tarball
	I1205 20:52:30.641328   46374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:30.687076   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:30.745580   46374 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:30.745603   46374 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:30.745681   46374 ssh_runner.go:195] Run: crio config
	I1205 20:52:30.807631   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:30.807656   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:30.807674   46374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:30.807692   46374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-331495 NodeName:embed-certs-331495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:30.807828   46374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-331495"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:30.807897   46374 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-331495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:30.807958   46374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:30.820571   46374 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:30.820679   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:30.831881   46374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:52:30.852058   46374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:30.870516   46374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 20:52:30.888000   46374 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:30.892529   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:30.904910   46374 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495 for IP: 192.168.72.180
	I1205 20:52:30.904950   46374 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:30.905143   46374 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:30.905197   46374 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:30.905280   46374 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/client.key
	I1205 20:52:30.905336   46374 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key.379caec1
	I1205 20:52:30.905368   46374 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key
	I1205 20:52:30.905463   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:30.905489   46374 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:30.905499   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:30.905525   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:30.905550   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:30.905572   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:30.905609   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:30.906129   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:30.930322   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:30.953120   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:30.976792   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:31.000462   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:31.025329   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:31.050451   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:31.075644   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:31.101693   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:31.125712   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:31.149721   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:31.173466   46374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:31.191836   46374 ssh_runner.go:195] Run: openssl version
	I1205 20:52:31.197909   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:31.212206   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219081   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219155   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.225423   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:31.239490   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:31.251505   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256613   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256678   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.262730   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:31.274879   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:31.286201   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291593   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291658   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.298904   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:31.310560   46374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:31.315670   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:31.322461   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:31.328590   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:31.334580   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:31.341827   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:31.348456   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
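Each of the `openssl x509 -noout -checkend 86400` runs above succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours); otherwise minikube would regenerate it. An equivalent check written directly in Go (a sketch, not minikube's implementation; the hard-coded path in main is just one of the files listed in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certValidFor reports whether the PEM certificate at path stays valid for at
    // least d, mirroring what `openssl x509 -noout -checkend 86400` verifies.
    func certValidFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }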
	I1205 20:52:31.354835   46374 kubeadm.go:404] StartCluster: {Name:embed-certs-331495 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:31.354945   46374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:31.355024   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:31.396272   46374 cri.go:89] found id: ""
	I1205 20:52:31.396346   46374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:31.406603   46374 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:31.406629   46374 kubeadm.go:636] restartCluster start
	I1205 20:52:31.406683   46374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:31.417671   46374 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.419068   46374 kubeconfig.go:92] found "embed-certs-331495" server: "https://192.168.72.180:8443"
	I1205 20:52:31.421304   46374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:31.432188   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.432260   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.445105   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.445132   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.445182   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.457857   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.958205   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.958322   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.972477   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.458645   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.458732   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.475471   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.958778   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.958872   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.973340   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.458838   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.458924   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.475090   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.958680   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.958776   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.974789   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.458297   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.458371   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.471437   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.958961   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.959030   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.972007   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:35.458648   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.458729   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.471573   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
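The `sudo pgrep -xnf kube-apiserver.*minikube.*` probes above are repeated roughly every half second while minikube waits for the restarted apiserver process to appear; each run exits with status 1 because nothing matches yet. A stand-alone sketch of that polling loop in Go (the function name, interval, and timeout here are illustrative assumptions, not the restartCluster code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls `sudo pgrep -xnf pattern` until it succeeds or the
    // timeout elapses; pgrep exits 0 once at least one process matches.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute))
    }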
	I1205 20:52:32.362684   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.362706   46866 pod_ready.go:81] duration metric: took 1.98737949s waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.362715   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368694   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.368717   46866 pod_ready.go:81] duration metric: took 5.993796ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368726   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375418   46866 pod_ready.go:92] pod "kube-proxy-mltvl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.375442   46866 pod_ready.go:81] duration metric: took 6.709035ms waiting for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375452   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383393   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.383418   46866 pod_ready.go:81] duration metric: took 7.957397ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383430   46866 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:34.497914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:34.693693   47365 addons.go:502] enable addons completed in 2.40279745s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 20:52:35.331317   46700 retry.go:31] will retry after 13.122920853s: kubelet not initialised
	I1205 20:52:35.958930   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.959020   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.971607   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.458135   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.458202   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.475097   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.958621   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.958703   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.974599   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.458670   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.458790   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.472296   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.958470   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.958561   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.971241   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.458862   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.458957   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.471475   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.958727   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.958807   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.971366   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.458991   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.459084   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.471352   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.958955   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.959052   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.972803   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:40.458181   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.458251   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.470708   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.499335   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:38.996779   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:36.611450   47365 node_ready.go:58] node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:39.111234   47365 node_ready.go:49] node "default-k8s-diff-port-463614" has status "Ready":"True"
	I1205 20:52:39.111266   47365 node_ready.go:38] duration metric: took 4.51686489s waiting for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:39.111278   47365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:39.117815   47365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124431   47365 pod_ready.go:92] pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.124455   47365 pod_ready.go:81] duration metric: took 6.615213ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124464   47365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131301   47365 pod_ready.go:92] pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.131340   47365 pod_ready.go:81] duration metric: took 6.85604ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131352   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:41.155265   47365 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:40.958830   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.958921   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.970510   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:41.432806   46374 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:41.432840   46374 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:41.432854   46374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:41.432909   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:41.476486   46374 cri.go:89] found id: ""
	I1205 20:52:41.476550   46374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:41.493676   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:41.503594   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:41.503681   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512522   46374 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512550   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:41.645081   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.368430   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.586289   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.657555   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.753020   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:42.753103   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:42.767926   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.286111   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.786148   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.285601   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.785638   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.285508   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.326812   46374 api_server.go:72] duration metric: took 2.573794156s to wait for apiserver process to appear ...
	I1205 20:52:45.326839   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:45.326857   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327337   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:45.327367   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327771   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:40.998702   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:43.508882   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:42.152898   47365 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:42.152926   47365 pod_ready.go:81] duration metric: took 3.021552509s waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:42.152939   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320531   47365 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.320632   47365 pod_ready.go:81] duration metric: took 1.167680941s waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320660   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521255   47365 pod_ready.go:92] pod "kube-proxy-g4zct" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.521286   47365 pod_ready.go:81] duration metric: took 200.606753ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521300   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911946   47365 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.911972   47365 pod_ready.go:81] duration metric: took 390.664131ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911983   47365 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:46.220630   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.459426   46700 kubeadm.go:787] kubelet initialised
	I1205 20:52:48.459452   46700 kubeadm.go:788] duration metric: took 53.977281861s waiting for restarted kubelet to initialise ...
	I1205 20:52:48.459460   46700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:48.465332   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471155   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.471184   46700 pod_ready.go:81] duration metric: took 5.815983ms waiting for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471195   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476833   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.476861   46700 pod_ready.go:81] duration metric: took 5.658311ms waiting for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476876   46700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481189   46700 pod_ready.go:92] pod "etcd-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.481217   46700 pod_ready.go:81] duration metric: took 4.332284ms waiting for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481230   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485852   46700 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.485869   46700 pod_ready.go:81] duration metric: took 4.630813ms waiting for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485879   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
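The pod_ready waits in this log poll each system-critical pod until its PodReady condition turns True, then record the elapsed duration metric. A condensed client-go sketch of that idea is below; it is not the test helper itself, and the exported name and 2-second poll interval are assumptions.

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitPodReady blocks until the named pod reports Ready=True or the timeout expires.
    func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }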
	I1205 20:52:45.828213   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.185115   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.185143   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.185156   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.228977   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.229017   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.328278   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.336930   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.336971   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
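Each /healthz probe above is a plain HTTPS GET against the apiserver endpoint: "connection refused" means the process is not listening yet, the 403 responses appear while the RBAC bootstrap roles that allow anonymous callers to read /healthz are still missing, and the 500 bodies list which poststarthook checks (the [-] lines) have not completed. A minimal version of such a probe in Go (a sketch; the timeout and the decision to skip TLS verification for the self-signed apiserver cert are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz GETs the apiserver /healthz endpoint and reports whether it
    // returned 200; 403 and 500 responses both mean "not ready yet".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The host running the probe does not trust the cluster CA here,
            // so certificate verification is skipped for this check only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connection refused" while the apiserver restarts
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.72.180:8443/healthz"))
    }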
	I1205 20:52:49.828530   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.835188   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.835215   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:50.328834   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.337852   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:50.337885   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:45.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:47.998466   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.497317   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.828313   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.835050   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:52:50.844093   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:50.844124   46374 api_server.go:131] duration metric: took 5.517278039s to wait for apiserver health ...
	I1205 20:52:50.844134   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:50.844141   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:50.846047   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
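	The 46374 run above shows minikube's api_server.go polling https://192.168.72.180:8443/healthz about every 500ms, treating the 500 responses (the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks had not completed yet) as not-ready, until the endpoint answers 200 "ok" roughly 5.5s later, after which the bridge CNI is selected for the kvm2 + crio combination. Below is a minimal sketch of that kind of wait loop, assuming a plain net/http client; the URL and 500ms interval are taken from the log, while the client construction and timeout handling are illustrative rather than minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
	// or the timeout expires; non-200 responses (like the 500s above while
	// post-start hooks are still pending) are printed and retried.
	func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // polling interval seen in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		// InsecureSkipVerify only keeps this example self-contained; a real
		// client would verify the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		if err := waitForHealthz(client, "https://192.168.72.180:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}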
	I1205 20:52:48.220942   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.720446   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.858954   46700 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.858980   46700 pod_ready.go:81] duration metric: took 373.093905ms waiting for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.858989   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260468   46700 pod_ready.go:92] pod "kube-proxy-r5n6g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.260493   46700 pod_ready.go:81] duration metric: took 401.497792ms waiting for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260501   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658952   46700 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.658977   46700 pod_ready.go:81] duration metric: took 398.469864ms waiting for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658986   46700 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:51.966947   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.848285   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:50.865469   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:50.918755   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:50.951671   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:50.951705   46374 system_pods.go:61] "coredns-5dd5756b68-7xr6w" [8300dbf8-413a-4171-9e56-53f0f2d03fd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:50.951712   46374 system_pods.go:61] "etcd-embed-certs-331495" [b2802bcb-262e-4d2a-9589-b1b3885de515] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:50.951722   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [6f9a28a7-8827-4071-8c68-f2671e7a8017] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:50.951738   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [24e85887-7f58-4a5c-b0d4-4eebd6076a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:50.951744   46374 system_pods.go:61] "kube-proxy-76qq2" [ffd744ec-9522-443c-b609-b11e24ab9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:50.951750   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [aaa502dc-a7cf-4f76-b79f-aa8be1ae48f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:50.951756   46374 system_pods.go:61] "metrics-server-57f55c9bc5-bcg28" [e60503c2-732d-44a3-b5da-fbf7a0cfd981] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:50.951761   46374 system_pods.go:61] "storage-provisioner" [be1aa61b-82e9-4382-ab1c-89e30b801fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:50.951767   46374 system_pods.go:74] duration metric: took 32.973877ms to wait for pod list to return data ...
	I1205 20:52:50.951773   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:50.971413   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:50.971440   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:50.971449   46374 node_conditions.go:105] duration metric: took 19.672668ms to run NodePressure ...
	I1205 20:52:50.971465   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:51.378211   46374 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383462   46374 kubeadm.go:787] kubelet initialised
	I1205 20:52:51.383487   46374 kubeadm.go:788] duration metric: took 5.246601ms waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383495   46374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:51.393558   46374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:53.414801   46374 pod_ready.go:102] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.426681   46374 pod_ready.go:92] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:55.426710   46374 pod_ready.go:81] duration metric: took 4.033124274s waiting for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:55.426725   46374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
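	The same 46374 burst also covers the bridge CNI step: it runs sudo mkdir -p /etc/cni/net.d, copies a 457-byte 1-k8s.conflist into that directory, and then runs the kubeadm addon phase before starting the pod readiness waits. The log does not include the file's contents, so the sketch below only writes a generic CNI bridge conflist of the same shape; the bridge name, subnet and plugin list are placeholder values, not what minikube generated in this run.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// A generic single-bridge CNI configuration in the conflist format that the
	// kubelet reads from /etc/cni/net.d. "bridge", "portmap" and the "host-local"
	// IPAM plugin are standard CNI plugins; the values below are illustrative.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // same effect as `mkdir -p /etc/cni/net.d`
			fmt.Println(err)
			return
		}
		path := filepath.Join(dir, "1-k8s.conflist")
		if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("wrote %d bytes to %s\n", len(bridgeConflist), path)
	}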
	I1205 20:52:52.498509   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.997539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:53.221825   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.723682   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.468896   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:56.966471   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.468158   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.469797   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.497582   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.500937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.727756   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.727968   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.466541   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469387   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469996   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.968435   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.969033   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.969065   46374 pod_ready.go:81] duration metric: took 9.542324599s waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.969073   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975019   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.975041   46374 pod_ready.go:81] duration metric: took 5.961268ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975049   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980743   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.980771   46374 pod_ready.go:81] duration metric: took 5.713974ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980779   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985565   46374 pod_ready.go:92] pod "kube-proxy-76qq2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.985596   46374 pod_ready.go:81] duration metric: took 4.805427ms waiting for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985610   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992009   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.992035   46374 pod_ready.go:81] duration metric: took 6.416324ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992047   46374 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
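	The pod_ready.go lines above report each control-plane pod's Ready condition ("Ready":"True" once etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler come up) and then start the 4m0s wait on metrics-server-57f55c9bc5-bcg28, which, like the mhxpn, 676m6 and pt8v6 metrics-server pods in the parallel runs, stays "False" for the rest of this log. Below is a minimal client-go sketch of that readiness check, assuming a kubeconfig on disk; the kubeconfig path and 2s poll interval are illustrative, and minikube's own pod_ready.go additionally records the duration metrics shown above.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		podName := "etcd-embed-certs-331495" // pod name taken from the log
		deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s" in the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
			switch {
			case err != nil:
				fmt.Println("get pod:", err)
			case isPodReady(pod):
				fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"True\"\n", podName)
				return
			default:
				fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", podName)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("timed out waiting for pod %q\n", podName)
	}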
	I1205 20:53:01.996877   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.997311   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:02.221319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.720314   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.966830   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.465943   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:07.272848   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.272897   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:05.997810   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.497408   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.722608   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.222226   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.965894   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.967253   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.466458   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.773608   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.773778   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.997547   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:12.999476   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.496736   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.721128   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.721371   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.221780   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.466602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.965160   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.272951   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.772527   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.497284   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.498006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.223073   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.724402   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.966424   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.466866   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.772789   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.273369   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:21.997270   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.496150   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:23.221999   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.223587   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.967755   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.465568   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.772596   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:30.273464   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:26.496470   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.003099   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.721654   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.724134   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.466332   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.966465   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.773521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:35.272236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.497006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.000663   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.221725   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.719806   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.466035   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.966501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:37.773436   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.274255   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.496949   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.996265   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.721339   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.723854   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.221087   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:39.465585   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.465785   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.467239   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:42.773263   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:44.773717   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.998588   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.496904   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.497783   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.222148   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.722122   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.966317   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.966572   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.272412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:49.273057   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.997444   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.496708   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.722350   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.219843   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.467523   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.967357   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:51.773424   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:53.775574   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.499839   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.997448   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.222442   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.719693   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:55.466751   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:57.966602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.271805   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.272923   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.273306   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.998244   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:59.498440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.720684   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.729688   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.220861   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.466162   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.966846   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.773903   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.271747   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.995748   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:04.002522   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:03.723212   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.224289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.465907   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.466264   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.272960   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.274281   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.497442   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.997440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.721146   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:10.724743   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.966368   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.966796   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.772305   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.772470   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.496229   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.497913   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.221912   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.722076   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:14.467708   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:16.965932   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.773481   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:17.774552   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.273733   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.998027   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.496453   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.497053   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.223289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.722234   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.966869   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:21.465921   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:23.466328   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.272550   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.497084   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:24.498177   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.727882   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.221485   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.966388   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.466553   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.772616   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.773188   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:26.997209   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.997776   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.721711   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.722528   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:30.964854   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.966383   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.272612   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.275600   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:31.498601   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:33.997450   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.220641   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.222232   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.476491   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.968512   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.772248   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.272991   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.997574   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.999016   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.501116   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.723179   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.220182   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.469607   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.968860   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.274044   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.502208   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:44.997516   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.720811   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.721757   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.725689   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.466766   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.966704   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.773511   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.273161   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.274031   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.497342   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:49.502501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.223549   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.719890   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.465849   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.466157   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.772748   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:55.272781   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:51.997636   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.499333   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.720512   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.721826   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.466519   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.466580   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.274370   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.774179   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.997654   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.497915   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.221713   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.723015   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:58.965289   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:00.966027   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.967557   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.273349   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.773101   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.996491   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:03.996649   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.723123   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.220986   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.224736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.466592   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.966611   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.773180   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.774008   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.997589   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.998076   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.001226   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.720517   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.221172   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.466096   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.467200   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.272981   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.773210   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.496043   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.497518   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.725751   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.219939   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.966795   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:17.466501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.272578   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.273500   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.997861   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.499434   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.221058   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.720978   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.466641   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.965389   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.772109   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.274633   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.997800   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:24.497501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.220292   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.722738   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.966366   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.966799   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.465341   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.773108   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:27.774236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.274971   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:26.997610   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.997753   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.220185   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.220399   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.466026   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.966220   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.772859   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:35.272898   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:31.497899   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:33.500772   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.220696   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.221098   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.222701   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.966787   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.465676   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.775190   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.272006   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.000539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.497044   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.720509   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.730400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:39.468063   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:41.966415   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:42.276412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.772916   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.996937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.496928   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.220575   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.724283   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.465646   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.467000   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.773090   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.273675   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.997477   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:47.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.998126   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.220758   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:50.720911   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.966711   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.468554   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.773277   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:52.501489   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:54.996998   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.221047   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.221493   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.965841   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.965891   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.465977   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.272446   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.772269   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.997565   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.496443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:57.722571   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.724736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.466069   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.966747   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.772715   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.271368   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.274084   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:01.498102   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.498428   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.220645   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.720012   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.966850   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.467719   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.772997   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.273279   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.998642   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:08.001018   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.496939   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:06.721938   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.219709   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:11.220579   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.968249   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.465039   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.773538   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.272696   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.500855   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.996837   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:13.725252   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.725522   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.465989   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:16.966908   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.273749   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.772650   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.496107   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.496914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:18.224365   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:20.720429   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.465513   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.967092   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.775353   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:24.277586   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.498047   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.999733   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.219319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:25.222340   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.967374   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.465973   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.468481   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.772514   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.774642   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.496794   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.498446   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:27.723499   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.222748   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.965650   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.967183   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.777450   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:33.276381   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.999443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.384081   46866 pod_ready.go:81] duration metric: took 4m0.000635015s waiting for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:32.384115   46866 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:32.384132   46866 pod_ready.go:38] duration metric: took 4m11.062812404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:32.384156   46866 kubeadm.go:640] restartCluster took 4m30.437260197s
	W1205 20:56:32.384250   46866 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:32.384280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:32.721610   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.220186   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.467452   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.966451   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.773516   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.773737   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.273185   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.221794   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:39.722400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.466005   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.467531   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.773790   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:45.272396   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:41.722481   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.734080   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.912982   47365 pod_ready.go:81] duration metric: took 4m0.000982583s waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:43.913024   47365 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:43.913038   47365 pod_ready.go:38] duration metric: took 4m4.801748698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:43.913063   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:56:43.913101   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:43.913175   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:43.965196   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:43.965220   47365 cri.go:89] found id: ""
	I1205 20:56:43.965228   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:43.965272   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:43.970257   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:43.970353   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:44.026974   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.027005   47365 cri.go:89] found id: ""
	I1205 20:56:44.027015   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:44.027099   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.032107   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:44.032212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:44.075721   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:44.075758   47365 cri.go:89] found id: ""
	I1205 20:56:44.075766   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:44.075823   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.082125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:44.082212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:44.125099   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:44.125122   47365 cri.go:89] found id: ""
	I1205 20:56:44.125129   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:44.125171   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.129477   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:44.129538   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:44.180281   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.180305   47365 cri.go:89] found id: ""
	I1205 20:56:44.180313   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:44.180357   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.185094   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:44.185173   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:44.228693   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.228719   47365 cri.go:89] found id: ""
	I1205 20:56:44.228730   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:44.228786   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.233574   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:44.233687   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:44.279286   47365 cri.go:89] found id: ""
	I1205 20:56:44.279312   47365 logs.go:284] 0 containers: []
	W1205 20:56:44.279321   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:44.279328   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:44.279390   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:44.333572   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.333598   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:44.333605   47365 cri.go:89] found id: ""
	I1205 20:56:44.333614   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:44.333678   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.339080   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.343653   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:44.343687   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.412744   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:44.412785   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.457374   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:44.457402   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.521640   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:44.521676   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:44.536612   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:44.536636   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.586795   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:44.586836   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:45.065254   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:45.065293   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:45.126209   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:45.126242   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:45.166553   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:45.166580   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:45.214849   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:45.214887   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:45.371687   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:45.371732   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:45.417585   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:45.417615   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:45.455524   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:45.455559   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:44.965462   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.967433   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:47.272958   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.274398   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.621173   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.236869123s)
	I1205 20:56:46.621264   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:46.636086   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:46.647003   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:46.657201   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:46.657241   46866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:56:46.882231   46866 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:48.007463   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:56:48.023675   47365 api_server.go:72] duration metric: took 4m15.243410399s to wait for apiserver process to appear ...
	I1205 20:56:48.023713   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:56:48.023748   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:48.023818   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:48.067278   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.067301   47365 cri.go:89] found id: ""
	I1205 20:56:48.067308   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:48.067359   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.072370   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:48.072446   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:48.118421   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:48.118444   47365 cri.go:89] found id: ""
	I1205 20:56:48.118453   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:48.118509   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.123954   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:48.124019   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:48.173864   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:48.173890   47365 cri.go:89] found id: ""
	I1205 20:56:48.173900   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:48.173955   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.178717   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:48.178790   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:48.221891   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:48.221915   47365 cri.go:89] found id: ""
	I1205 20:56:48.221924   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:48.221985   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.226811   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:48.226886   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:48.271431   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:48.271454   47365 cri.go:89] found id: ""
	I1205 20:56:48.271463   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:48.271518   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.276572   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:48.276655   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:48.326438   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:48.326466   47365 cri.go:89] found id: ""
	I1205 20:56:48.326476   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:48.326534   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.334539   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:48.334611   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:48.377929   47365 cri.go:89] found id: ""
	I1205 20:56:48.377955   47365 logs.go:284] 0 containers: []
	W1205 20:56:48.377965   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:48.377973   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:48.378035   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:48.430599   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:48.430621   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:48.430629   47365 cri.go:89] found id: ""
	I1205 20:56:48.430638   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:48.430691   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.434882   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.439269   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:48.439299   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.495069   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:48.495113   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:48.955220   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:48.955257   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:48.971222   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:48.971246   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:49.108437   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:49.108470   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:49.150916   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:49.150940   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:49.207092   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:49.207141   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:49.251940   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:49.251969   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:49.293885   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:49.293918   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:49.349151   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:49.349187   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:49.403042   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:49.403079   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:49.466816   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:49.466858   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:49.525300   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:49.525341   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:49.467873   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.659950   46700 pod_ready.go:81] duration metric: took 4m0.000950283s waiting for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:49.659985   46700 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:49.660008   46700 pod_ready.go:38] duration metric: took 4m1.200539602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:49.660056   46700 kubeadm.go:640] restartCluster took 5m17.548124184s
	W1205 20:56:49.660130   46700 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:49.660162   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:51.776117   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:54.275521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:52.099610   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:56:52.106838   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:56:52.109813   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:56:52.109835   47365 api_server.go:131] duration metric: took 4.086114093s to wait for apiserver health ...
	I1205 20:56:52.109845   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:56:52.109874   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:52.109929   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:52.155290   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:52.155319   47365 cri.go:89] found id: ""
	I1205 20:56:52.155328   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:52.155382   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.160069   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:52.160137   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:52.197857   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.197885   47365 cri.go:89] found id: ""
	I1205 20:56:52.197894   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:52.197956   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.203012   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:52.203075   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:52.257881   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.257904   47365 cri.go:89] found id: ""
	I1205 20:56:52.257914   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:52.257972   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.264817   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:52.264899   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:52.313302   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.313331   47365 cri.go:89] found id: ""
	I1205 20:56:52.313341   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:52.313398   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.318864   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:52.318972   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:52.389306   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.389333   47365 cri.go:89] found id: ""
	I1205 20:56:52.389342   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:52.389400   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.406125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:52.406194   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:52.458735   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:52.458760   47365 cri.go:89] found id: ""
	I1205 20:56:52.458770   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:52.458821   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.463571   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:52.463642   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:52.529035   47365 cri.go:89] found id: ""
	I1205 20:56:52.529067   47365 logs.go:284] 0 containers: []
	W1205 20:56:52.529079   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:52.529088   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:52.529157   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:52.583543   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:52.583578   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.583585   47365 cri.go:89] found id: ""
	I1205 20:56:52.583594   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:52.583649   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.589299   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.595000   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:52.595024   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.671447   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:52.671487   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.719185   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:52.719223   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:52.780173   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:52.780203   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.823808   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:52.823843   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.874394   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:52.874428   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:52.938139   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:52.938177   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.982386   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:52.982414   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:53.029082   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:53.029111   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:53.447057   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:53.447099   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:53.465029   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:53.465066   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:53.627351   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:53.627400   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:53.694357   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:53.694393   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:56.267579   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:56:56.267614   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.267624   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.267631   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.267638   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.267644   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.267650   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.267660   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.267672   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.267683   47365 system_pods.go:74] duration metric: took 4.157830691s to wait for pod list to return data ...
	I1205 20:56:56.267696   47365 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:56:56.271148   47365 default_sa.go:45] found service account: "default"
	I1205 20:56:56.271170   47365 default_sa.go:55] duration metric: took 3.468435ms for default service account to be created ...
	I1205 20:56:56.271176   47365 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:56:56.277630   47365 system_pods.go:86] 8 kube-system pods found
	I1205 20:56:56.277654   47365 system_pods.go:89] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.277660   47365 system_pods.go:89] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.277665   47365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.277669   47365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.277674   47365 system_pods.go:89] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.277679   47365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.277688   47365 system_pods.go:89] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.277696   47365 system_pods.go:89] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.277715   47365 system_pods.go:126] duration metric: took 6.533492ms to wait for k8s-apps to be running ...
	I1205 20:56:56.277726   47365 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:56:56.277772   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:56.296846   47365 system_svc.go:56] duration metric: took 19.109991ms WaitForService to wait for kubelet.
	I1205 20:56:56.296877   47365 kubeadm.go:581] duration metric: took 4m23.516618576s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:56:56.296902   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:56:56.301504   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:56:56.301530   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:56:56.301542   47365 node_conditions.go:105] duration metric: took 4.634882ms to run NodePressure ...
	I1205 20:56:56.301552   47365 start.go:228] waiting for startup goroutines ...
	I1205 20:56:56.301560   47365 start.go:233] waiting for cluster config update ...
	I1205 20:56:56.301573   47365 start.go:242] writing updated cluster config ...
	I1205 20:56:56.301859   47365 ssh_runner.go:195] Run: rm -f paused
	I1205 20:56:56.357189   47365 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:56:56.358798   47365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-463614" cluster and "default" namespace by default
	I1205 20:56:54.756702   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096505481s)
	I1205 20:56:54.756786   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:54.774684   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:54.786308   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:54.796762   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:54.796809   46700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 20:56:55.081318   46700 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:58.569752   46866 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1205 20:56:58.569873   46866 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:56:58.569988   46866 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:56:58.570119   46866 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:56:58.570261   46866 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:56:58.570368   46866 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:56:58.572785   46866 out.go:204]   - Generating certificates and keys ...
	I1205 20:56:58.573020   46866 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:56:58.573232   46866 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:56:58.573410   46866 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:56:58.573510   46866 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:56:58.573717   46866 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:56:58.573868   46866 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:56:58.574057   46866 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:56:58.574229   46866 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:56:58.574517   46866 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:56:58.574760   46866 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:56:58.574903   46866 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:56:58.575070   46866 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:56:58.575205   46866 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:56:58.575363   46866 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:56:58.575515   46866 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:56:58.575600   46866 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:56:58.575799   46866 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:56:58.576083   46866 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:56:58.576320   46866 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:56:58.580654   46866 out.go:204]   - Booting up control plane ...
	I1205 20:56:58.581337   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:56:58.581851   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:56:58.582029   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:56:58.582667   46866 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:56:58.582988   46866 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:56:58.583126   46866 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:56:58.583631   46866 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:56:58.583908   46866 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502137 seconds
	I1205 20:56:58.584157   46866 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:56:58.584637   46866 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:56:58.584882   46866 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:56:58.585370   46866 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:56:58.585492   46866 kubeadm.go:322] [bootstrap-token] Using token: fap3k3.pr3uz4d90n7oyvds
	I1205 20:56:58.590063   46866 out.go:204]   - Configuring RBAC rules ...
	I1205 20:56:58.590356   46866 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:56:58.590482   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:56:58.590692   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:56:58.590887   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:56:58.591031   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:56:58.591131   46866 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:56:58.591269   46866 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:56:58.591323   46866 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:56:58.591378   46866 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:56:58.591383   46866 kubeadm.go:322] 
	I1205 20:56:58.591455   46866 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:56:58.591462   46866 kubeadm.go:322] 
	I1205 20:56:58.591554   46866 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:56:58.591559   46866 kubeadm.go:322] 
	I1205 20:56:58.591590   46866 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:56:58.591659   46866 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:56:58.591719   46866 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:56:58.591724   46866 kubeadm.go:322] 
	I1205 20:56:58.591787   46866 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:56:58.591793   46866 kubeadm.go:322] 
	I1205 20:56:58.591848   46866 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:56:58.591853   46866 kubeadm.go:322] 
	I1205 20:56:58.591914   46866 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:56:58.592015   46866 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:56:58.592093   46866 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:56:58.592099   46866 kubeadm.go:322] 
	I1205 20:56:58.592197   46866 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:56:58.592300   46866 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:56:58.592306   46866 kubeadm.go:322] 
	I1205 20:56:58.592403   46866 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592525   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:56:58.592550   46866 kubeadm.go:322] 	--control-plane 
	I1205 20:56:58.592558   46866 kubeadm.go:322] 
	I1205 20:56:58.592645   46866 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:56:58.592650   46866 kubeadm.go:322] 
	I1205 20:56:58.592743   46866 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592870   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:56:58.592880   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:56:58.592889   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:56:58.594456   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:56:56.773764   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.778395   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.595862   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:56:58.625177   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:56:58.683896   46866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:56:58.683977   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.684060   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=no-preload-143651 minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.741242   46866 ops.go:34] apiserver oom_adj: -16
	I1205 20:56:59.114129   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.238212   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.869086   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:00.368538   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.272299   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:03.272604   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:04.992619   46374 pod_ready.go:81] duration metric: took 4m0.000553964s waiting for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:04.992652   46374 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:57:04.992691   46374 pod_ready.go:38] duration metric: took 4m13.609186276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:04.992726   46374 kubeadm.go:640] restartCluster took 4m33.586092425s
	W1205 20:57:04.992782   46374 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:57:04.992808   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:57:00.868500   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.369084   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.368409   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.869341   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.368765   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.869054   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.368855   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.869144   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:05.368635   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.047040   46700 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1205 20:57:09.047132   46700 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:09.047236   46700 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:09.047350   46700 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:09.047462   46700 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:09.047583   46700 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:09.047693   46700 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:09.047752   46700 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1205 20:57:09.047825   46700 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:09.049606   46700 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:09.049706   46700 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:09.049802   46700 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:09.049885   46700 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:09.049963   46700 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:09.050058   46700 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:09.050148   46700 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:09.050235   46700 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:09.050350   46700 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:09.050468   46700 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:09.050563   46700 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:09.050627   46700 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:09.050732   46700 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:09.050817   46700 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:09.050897   46700 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:09.050997   46700 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:09.051080   46700 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:09.051165   46700 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:09.052610   46700 out.go:204]   - Booting up control plane ...
	I1205 20:57:09.052722   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:09.052806   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:09.052870   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:09.052965   46700 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:09.053103   46700 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:09.053203   46700 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005642 seconds
	I1205 20:57:09.053354   46700 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:09.053514   46700 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:09.053563   46700 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:09.053701   46700 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-061206 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 20:57:09.053783   46700 kubeadm.go:322] [bootstrap-token] Using token: syik3l.i77juzhd1iybx3my
	I1205 20:57:09.055286   46700 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:09.055409   46700 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:09.055599   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:09.055749   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:09.055862   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:09.055982   46700 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:09.056043   46700 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:09.056106   46700 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:09.056116   46700 kubeadm.go:322] 
	I1205 20:57:09.056197   46700 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:09.056207   46700 kubeadm.go:322] 
	I1205 20:57:09.056307   46700 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:09.056329   46700 kubeadm.go:322] 
	I1205 20:57:09.056377   46700 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:09.056456   46700 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:09.056533   46700 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:09.056540   46700 kubeadm.go:322] 
	I1205 20:57:09.056600   46700 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:09.056669   46700 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:09.056729   46700 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:09.056737   46700 kubeadm.go:322] 
	I1205 20:57:09.056804   46700 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1205 20:57:09.056868   46700 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:09.056874   46700 kubeadm.go:322] 
	I1205 20:57:09.056944   46700 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057093   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:09.057135   46700 kubeadm.go:322]     --control-plane 	  
	I1205 20:57:09.057150   46700 kubeadm.go:322] 
	I1205 20:57:09.057252   46700 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:09.057260   46700 kubeadm.go:322] 
	I1205 20:57:09.057360   46700 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057502   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:09.057514   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:57:09.057520   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:09.058762   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:05.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.368434   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.869228   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.369175   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.868933   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.369028   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.868920   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.369223   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.869130   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.369240   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.869318   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.369189   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.576975   46866 kubeadm.go:1088] duration metric: took 12.893071134s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:11.577015   46866 kubeadm.go:406] StartCluster complete in 5m9.690903424s
	I1205 20:57:11.577039   46866 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.577129   46866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:11.579783   46866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.580131   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:11.580364   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:57:11.580360   46866 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:11.580446   46866 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143651"
	I1205 20:57:11.580467   46866 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143651"
	W1205 20:57:11.580479   46866 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:11.580518   46866 addons.go:69] Setting metrics-server=true in profile "no-preload-143651"
	I1205 20:57:11.580535   46866 addons.go:231] Setting addon metrics-server=true in "no-preload-143651"
	W1205 20:57:11.580544   46866 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:11.580575   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580583   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580982   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580994   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580497   46866 addons.go:69] Setting default-storageclass=true in profile "no-preload-143651"
	I1205 20:57:11.581018   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581027   46866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143651"
	I1205 20:57:11.581303   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581357   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.581383   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.600887   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1205 20:57:11.600886   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1205 20:57:11.601552   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601681   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601760   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I1205 20:57:11.602152   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602177   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602260   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.602348   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602370   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602603   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602719   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602806   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.602996   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.603020   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.603329   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.603379   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.603477   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.603997   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.604040   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.606962   46866 addons.go:231] Setting addon default-storageclass=true in "no-preload-143651"
	W1205 20:57:11.606986   46866 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:11.607009   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.607331   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.607363   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.624885   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I1205 20:57:11.625358   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.625857   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.625869   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.626331   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.626627   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I1205 20:57:11.626832   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.627179   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.631282   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1205 20:57:11.632431   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.632516   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.632599   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.632763   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.633113   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.633639   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.633883   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.634495   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.634539   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.634823   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.637060   46866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:11.635196   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.641902   46866 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:11.641932   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:11.641960   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.642616   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.644862   46866 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:11.647090   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:11.647113   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:11.647134   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.646852   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647539   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.647564   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647755   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.648063   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.648295   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.648520   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.654458   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.654493   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654522   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.654556   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654801   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.655015   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.655247   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.661244   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1205 20:57:11.661886   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.662508   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.662534   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.663651   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.663907   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.666067   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.666501   46866 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.666523   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:11.666543   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.669659   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670106   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.670132   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670479   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.670673   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.670802   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.670915   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.816687   46866 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143651" context rescaled to 1 replicas
	I1205 20:57:11.816742   46866 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:11.820014   46866 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:09.060305   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:09.069861   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:09.093691   46700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:09.093847   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.093914   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=old-k8s-version-061206 minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.123857   46700 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:09.315555   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.435904   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.049845   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.549703   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.049931   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.549848   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.049776   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.549841   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.050053   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.549531   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.821903   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:11.831116   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:11.867528   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.969463   46866 node_ready.go:35] waiting up to 6m0s for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:11.976207   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:11.976235   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:11.977230   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:12.003110   46866 node_ready.go:49] node "no-preload-143651" has status "Ready":"True"
	I1205 20:57:12.003132   46866 node_ready.go:38] duration metric: took 33.629273ms waiting for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:12.003142   46866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:12.053173   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:12.053208   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:12.140411   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:12.170492   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.170521   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:12.251096   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.778963   46866 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:12.779026   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779040   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779377   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779402   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.779411   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779411   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.779418   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779625   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779665   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.786021   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.786045   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.786331   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.786380   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.786400   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194477   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217217088s)
	I1205 20:57:13.194529   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194543   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.194883   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.194929   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.194948   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194960   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194970   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.195198   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.195212   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562441   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311301688s)
	I1205 20:57:13.562496   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562512   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.562826   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.562845   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562856   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562865   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.563115   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.563164   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.563177   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.563190   46866 addons.go:467] Verifying addon metrics-server=true in "no-preload-143651"
	I1205 20:57:13.564940   46866 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:13.566316   46866 addons.go:502] enable addons completed in 1.985974766s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:14.389400   46866 pod_ready.go:102] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:15.388445   46866 pod_ready.go:92] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.388478   46866 pod_ready.go:81] duration metric: took 3.248030471s waiting for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.388493   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.391728   46866 pod_ready.go:97] error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391759   46866 pod_ready.go:81] duration metric: took 3.251498ms waiting for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:15.391772   46866 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391781   46866 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399725   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.399745   46866 pod_ready.go:81] duration metric: took 7.956804ms waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399759   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407412   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.407436   46866 pod_ready.go:81] duration metric: took 7.672123ms waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407446   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414249   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.414295   46866 pod_ready.go:81] duration metric: took 6.840313ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414309   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587237   46866 pod_ready.go:92] pod "kube-proxy-6txsz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.587271   46866 pod_ready.go:81] duration metric: took 172.95478ms waiting for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587286   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985901   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.985930   46866 pod_ready.go:81] duration metric: took 398.634222ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985943   46866 pod_ready.go:38] duration metric: took 3.982790764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:15.985960   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:15.986019   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:16.009052   46866 api_server.go:72] duration metric: took 4.192253908s to wait for apiserver process to appear ...
	I1205 20:57:16.009082   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:16.009100   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:57:16.014689   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:57:16.015758   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:57:16.015781   46866 api_server.go:131] duration metric: took 6.691652ms to wait for apiserver health ...
	I1205 20:57:16.015791   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:16.188198   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:16.188232   46866 system_pods.go:61] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.188240   46866 system_pods.go:61] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.188246   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.188254   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.188261   46866 system_pods.go:61] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.188267   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.188279   46866 system_pods.go:61] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.188290   46866 system_pods.go:61] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.188301   46866 system_pods.go:74] duration metric: took 172.503422ms to wait for pod list to return data ...
	I1205 20:57:16.188311   46866 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:16.384722   46866 default_sa.go:45] found service account: "default"
	I1205 20:57:16.384759   46866 default_sa.go:55] duration metric: took 196.435091ms for default service account to be created ...
	I1205 20:57:16.384769   46866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:16.587515   46866 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:16.587542   46866 system_pods.go:89] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.587547   46866 system_pods.go:89] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.587554   46866 system_pods.go:89] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.587561   46866 system_pods.go:89] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.587567   46866 system_pods.go:89] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.587574   46866 system_pods.go:89] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.587585   46866 system_pods.go:89] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.587593   46866 system_pods.go:89] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.587604   46866 system_pods.go:126] duration metric: took 202.829744ms to wait for k8s-apps to be running ...
	I1205 20:57:16.587613   46866 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:16.587654   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:16.602489   46866 system_svc.go:56] duration metric: took 14.864421ms WaitForService to wait for kubelet.
	I1205 20:57:16.602521   46866 kubeadm.go:581] duration metric: took 4.785728725s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:16.602545   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:16.785610   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:16.785646   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:16.785663   46866 node_conditions.go:105] duration metric: took 183.112914ms to run NodePressure ...
	I1205 20:57:16.785677   46866 start.go:228] waiting for startup goroutines ...
	I1205 20:57:16.785686   46866 start.go:233] waiting for cluster config update ...
	I1205 20:57:16.785705   46866 start.go:242] writing updated cluster config ...
	I1205 20:57:16.786062   46866 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:16.840981   46866 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1205 20:57:16.842980   46866 out.go:177] * Done! kubectl is now configured to use "no-preload-143651" cluster and "default" namespace by default
	I1205 20:57:14.049305   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:14.549423   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.050061   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.550221   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.049450   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.550094   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.049900   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.549923   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.050255   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.549399   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.615362   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.62253521s)
	I1205 20:57:19.615425   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:19.633203   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:57:19.643629   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:57:19.653655   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:57:19.653717   46374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:57:19.709748   46374 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:57:19.709836   46374 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:19.887985   46374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:19.888143   46374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:19.888243   46374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:20.145182   46374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:20.147189   46374 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:20.147319   46374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:20.147389   46374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:20.147482   46374 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:20.147875   46374 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:20.148583   46374 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:20.149486   46374 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:20.150362   46374 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:20.150974   46374 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:20.151523   46374 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:20.152166   46374 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:20.152419   46374 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:20.152504   46374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:20.435395   46374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:20.606951   46374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:20.754435   46374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:20.953360   46374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:20.954288   46374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:20.958413   46374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:19.049689   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.549608   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.049856   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.550245   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.050001   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.549839   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.049908   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.549764   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.050204   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.550196   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.049420   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.550152   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.050103   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.202067   46700 kubeadm.go:1088] duration metric: took 16.108268519s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:25.202100   46700 kubeadm.go:406] StartCluster complete in 5m53.142100786s
	I1205 20:57:25.202121   46700 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.202211   46700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:25.204920   46700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.205284   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:25.205635   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:57:25.205792   46700 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:25.205865   46700 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061206"
	I1205 20:57:25.205888   46700 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-061206"
	W1205 20:57:25.205896   46700 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:25.205954   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.205982   46700 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206011   46700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061206"
	I1205 20:57:25.206429   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206436   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206457   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206459   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206517   46700 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206531   46700 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-061206"
	W1205 20:57:25.206538   46700 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:25.206578   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.206906   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206936   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.228876   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1205 20:57:25.228902   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1205 20:57:25.229036   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1205 20:57:25.229487   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229569   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229646   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.230209   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230230   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230413   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230426   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230468   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230492   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230851   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.231494   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.231520   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.231955   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.232544   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.232578   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.233084   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.233307   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.237634   46700 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-061206"
	W1205 20:57:25.237660   46700 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:25.237691   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.238103   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.238138   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.252274   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I1205 20:57:25.252709   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.253307   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.253327   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.253689   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.253874   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.255891   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.258376   46700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:25.256849   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I1205 20:57:25.260119   46700 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.260145   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:25.260168   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.261358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.262042   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.262063   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.262590   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.262765   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.265705   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.265905   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.267942   46700 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:25.266347   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.266528   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.269653   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.269661   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:25.269687   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:25.269708   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.270383   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.270602   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.270764   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.274415   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.274914   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.274939   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.275267   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.275451   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.275594   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.275736   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.282847   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1205 20:57:25.283552   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.284174   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.284192   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.284659   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.285434   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.285469   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.306845   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1205 20:57:25.307358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.307884   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.307905   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.308302   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.308605   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.310363   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.310649   46700 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.310663   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:25.310682   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.313904   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314451   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.314482   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314756   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.314941   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.315053   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.315153   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.456874   46700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061206" context rescaled to 1 replicas
	I1205 20:57:25.456922   46700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:25.459008   46700 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:20.960444   46374 out.go:204]   - Booting up control plane ...
	I1205 20:57:20.960603   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:20.960721   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:20.961220   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:20.981073   46374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:20.982383   46374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:20.982504   46374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:57:21.127167   46374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:25.460495   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:25.531367   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.531600   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:25.531618   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:25.543589   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.624622   46700 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.624655   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:25.660979   46700 node_ready.go:49] node "old-k8s-version-061206" has status "Ready":"True"
	I1205 20:57:25.661005   46700 node_ready.go:38] duration metric: took 36.286483ms waiting for node "old-k8s-version-061206" to be "Ready" ...
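
The node_ready wait above finishes in about 36ms because the kubelet already reports the node Ready. As a rough, self-contained illustration of that kind of check (not minikube's own code; the kubeconfig path is a placeholder), a client-go sketch might look like this:

// nodeready_sketch.go: poll a node's Ready condition via client-go.
// Hypothetical helper; the kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		}
		time.Sleep(2 * time.Second) // simple fixed poll interval
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "old-k8s-version-061206", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
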
	I1205 20:57:25.661017   46700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:25.666179   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:25.666208   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:25.796077   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:26.018114   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.018141   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:26.124357   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.905138   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.37373154s)
	I1205 20:57:26.905210   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905526   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905553   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.905567   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905576   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:26.905905   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905917   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964563   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.964593   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.964920   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.964940   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964974   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465231   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.92160273s)
	I1205 20:57:27.465236   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.840348969s)
	I1205 20:57:27.465312   46700 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
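
The replace pipeline that just completed rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway IP (192.168.50.1 here) by inserting a hosts block ahead of the forward plugin. A minimal sketch of the same transformation in Go, using a trimmed placeholder Corefile rather than the cluster's actual ConfigMap:

// corefile_sketch.go: insert a host.minikube.internal record into a Corefile,
// mirroring what the sed pipeline above does. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately before the forward plugin, so
		// names it does not match fall through to the upstream resolver.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}
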
	I1205 20:57:27.465289   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465379   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.465718   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465761   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.465771   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.465780   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465790   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.467788   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.467820   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.467829   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628166   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.503702639s)
	I1205 20:57:27.628242   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628262   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628592   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628617   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628627   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628637   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628714   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.628851   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628866   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628885   46700 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-061206"
	I1205 20:57:27.632134   46700 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:27.634065   46700 addons.go:502] enable addons completed in 2.428270131s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:28.052082   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:29.630980   46374 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503524 seconds
	I1205 20:57:29.631109   46374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:29.651107   46374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:30.184174   46374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:30.184401   46374 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-331495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:57:30.703275   46374 kubeadm.go:322] [bootstrap-token] Using token: 28cbrl.nve3765a0enwbcr0
	I1205 20:57:30.705013   46374 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:30.705155   46374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:30.718386   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:57:30.727275   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:30.734448   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:30.741266   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:30.746706   46374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:30.765198   46374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:57:31.046194   46374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:31.133417   46374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:31.133438   46374 kubeadm.go:322] 
	I1205 20:57:31.133501   46374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:31.133509   46374 kubeadm.go:322] 
	I1205 20:57:31.133647   46374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:31.133667   46374 kubeadm.go:322] 
	I1205 20:57:31.133707   46374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:31.133781   46374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:31.133853   46374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:31.133863   46374 kubeadm.go:322] 
	I1205 20:57:31.133918   46374 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:57:31.133925   46374 kubeadm.go:322] 
	I1205 20:57:31.133983   46374 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:57:31.133993   46374 kubeadm.go:322] 
	I1205 20:57:31.134042   46374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:31.134103   46374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:31.134262   46374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:31.134300   46374 kubeadm.go:322] 
	I1205 20:57:31.134417   46374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:57:31.134526   46374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:31.134541   46374 kubeadm.go:322] 
	I1205 20:57:31.134671   46374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.134823   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:31.134858   46374 kubeadm.go:322] 	--control-plane 
	I1205 20:57:31.134867   46374 kubeadm.go:322] 
	I1205 20:57:31.134986   46374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:31.134997   46374 kubeadm.go:322] 
	I1205 20:57:31.135114   46374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.135272   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:31.135908   46374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:57:31.135934   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:57:31.135944   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:31.137845   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:30.540402   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:33.040756   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:31.139429   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:31.181897   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:31.202833   46374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:31.202901   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.202910   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=embed-certs-331495 minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.298252   46374 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:31.569929   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.694250   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.294912   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.795323   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.295495   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.794998   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.294843   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.794730   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:35.295505   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.538542   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.538568   46700 pod_ready.go:81] duration metric: took 8.742457359s waiting for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.538579   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.540738   46700 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540763   46700 pod_ready.go:81] duration metric: took 2.177251ms waiting for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:34.540771   46700 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540777   46700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545336   46700 pod_ready.go:92] pod "kube-proxy-j68qr" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.545360   46700 pod_ready.go:81] duration metric: took 4.576584ms waiting for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545370   46700 pod_ready.go:38] duration metric: took 8.884340587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:34.545387   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:34.545442   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:34.561744   46700 api_server.go:72] duration metric: took 9.104792218s to wait for apiserver process to appear ...
	I1205 20:57:34.561769   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:34.561786   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:57:34.568456   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:57:34.569584   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:57:34.569608   46700 api_server.go:131] duration metric: took 7.832231ms to wait for apiserver health ...
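
The healthz probe above polls https://192.168.50.116:8443/healthz until it answers 200 with "ok" before the version and pod checks proceed. A stdlib-only sketch of such a probe (illustrative; a real client would trust the cluster CA instead of skipping TLS verification):

// healthz_sketch.go: poll the apiserver /healthz endpoint until it answers "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps this sketch self-contained; do not
			// do this outside of a throwaway test environment.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.116:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}
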
	I1205 20:57:34.569618   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:34.573936   46700 system_pods.go:59] 4 kube-system pods found
	I1205 20:57:34.573962   46700 system_pods.go:61] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.573969   46700 system_pods.go:61] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.573979   46700 system_pods.go:61] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.573989   46700 system_pods.go:61] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.574004   46700 system_pods.go:74] duration metric: took 4.378461ms to wait for pod list to return data ...
	I1205 20:57:34.574016   46700 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:34.577236   46700 default_sa.go:45] found service account: "default"
	I1205 20:57:34.577258   46700 default_sa.go:55] duration metric: took 3.232577ms for default service account to be created ...
	I1205 20:57:34.577268   46700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:34.581061   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.581080   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.581086   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.581093   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.581098   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.581112   46700 retry.go:31] will retry after 312.287284ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:34.898504   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.898531   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.898536   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.898545   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.898549   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.898563   46700 retry.go:31] will retry after 340.858289ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.244211   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.244237   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.244242   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.244249   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.244253   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.244267   46700 retry.go:31] will retry after 398.30611ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.649011   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.649042   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.649050   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.649061   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.649068   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.649086   46700 retry.go:31] will retry after 397.404602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.052047   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.052079   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.052087   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.052097   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.052105   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.052124   46700 retry.go:31] will retry after 604.681853ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.662177   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.662206   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.662213   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.662223   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.662229   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.662247   46700 retry.go:31] will retry after 732.227215ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:37.399231   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:37.399264   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:37.399272   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:37.399282   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:37.399289   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:37.399308   46700 retry.go:31] will retry after 1.17612773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.795241   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.295081   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.795352   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.295506   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.794785   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.294797   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.794948   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.295478   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.795706   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:40.295444   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.581173   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:38.581201   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:38.581207   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:38.581220   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:38.581225   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:38.581239   46700 retry.go:31] will retry after 1.118915645s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:39.704807   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:39.704835   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:39.704841   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:39.704847   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:39.704854   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:39.704872   46700 retry.go:31] will retry after 1.49556329s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:41.205278   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:41.205316   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:41.205324   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:41.205331   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:41.205336   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:41.205357   46700 retry.go:31] will retry after 2.273757829s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:43.485079   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:43.485109   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:43.485125   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:43.485132   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:43.485137   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:43.485153   46700 retry.go:31] will retry after 2.2120181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
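
The run of "will retry after ..." entries above is a poll loop with a growing delay, waiting for the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods to show up in kube-system. A hedged, stdlib-only sketch of that pattern (the delays and the stub predicate are illustrative, not minikube's actual values):

// podwait_sketch.go: retry with a growing delay until a predicate reports that
// all required control-plane components are present.
package main

import (
	"fmt"
	"time"
)

// waitFor keeps calling check, sleeping a little longer between attempts,
// until it succeeds or the overall timeout elapses.
func waitFor(check func() (bool, string), timeout time.Duration) error {
	delay := 300 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, missing := check()
		if ok {
			return nil
		}
		fmt.Printf("will retry after %s: missing components: %s\n", delay, missing)
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay += delay / 2 // grow the delay between polls
		}
	}
	return fmt.Errorf("timed out after %s", timeout)
}

func main() {
	attempts := 0
	err := waitFor(func() (bool, string) {
		attempts++
		if attempts < 4 { // stub: pretend the static pods appear on the 4th poll
			return false, "etcd, kube-apiserver, kube-controller-manager, kube-scheduler"
		}
		return true, ""
	}, time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("all system-critical pods found")
}
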
	I1205 20:57:40.794725   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.295631   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.795542   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.295514   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.795481   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.295525   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.795463   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.295442   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.451570   46374 kubeadm.go:1088] duration metric: took 13.248732973s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:44.451605   46374 kubeadm.go:406] StartCluster complete in 5m13.096778797s
	I1205 20:57:44.451631   46374 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.451730   46374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:44.454306   46374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.454587   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:44.454611   46374 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:44.454695   46374 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-331495"
	I1205 20:57:44.454720   46374 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-331495"
	W1205 20:57:44.454731   46374 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:44.454766   46374 addons.go:69] Setting default-storageclass=true in profile "embed-certs-331495"
	I1205 20:57:44.454781   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.454783   46374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-331495"
	I1205 20:57:44.454840   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:57:44.454884   46374 addons.go:69] Setting metrics-server=true in profile "embed-certs-331495"
	I1205 20:57:44.454899   46374 addons.go:231] Setting addon metrics-server=true in "embed-certs-331495"
	W1205 20:57:44.454907   46374 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:44.454949   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.455191   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455213   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455216   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455231   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455237   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455259   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.473063   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1205 20:57:44.473083   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1205 20:57:44.473135   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I1205 20:57:44.473509   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.473642   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474153   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474171   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474179   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474197   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474336   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474566   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474637   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474761   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474785   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474877   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.475234   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475260   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.475295   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.475833   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475871   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.478828   46374 addons.go:231] Setting addon default-storageclass=true in "embed-certs-331495"
	W1205 20:57:44.478852   46374 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:44.478882   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.479277   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.479311   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.493193   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1205 20:57:44.493380   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:57:44.493637   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.493775   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.494092   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494108   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494242   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494252   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494488   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494624   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.494682   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.496908   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.497156   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.498954   46374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:44.500583   46374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:44.499205   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1205 20:57:44.502186   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:44.502199   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:44.502214   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.502313   46374 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.502329   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:44.502349   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.503728   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.504065   46374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-331495" context rescaled to 1 replicas
	I1205 20:57:44.504105   46374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:44.505773   46374 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:44.507622   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:44.505350   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.507719   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.505638   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.507792   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.507821   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.506710   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.507399   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508237   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.508287   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508353   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.508369   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508440   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.508506   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.508671   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508678   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.508996   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.509016   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.509373   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.509567   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.525720   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I1205 20:57:44.526352   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.526817   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.526831   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.527096   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.527248   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.529415   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.529714   46374 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.529725   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:44.529737   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.532475   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533019   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.533042   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.533393   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.533527   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.533614   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.688130   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:44.688235   46374 node_ready.go:35] waiting up to 6m0s for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727420   46374 node_ready.go:49] node "embed-certs-331495" has status "Ready":"True"
	I1205 20:57:44.727442   46374 node_ready.go:38] duration metric: took 39.185885ms waiting for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727450   46374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:44.732130   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:44.732147   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:44.738201   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.771438   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.811415   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:44.811441   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:44.813276   46374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:44.891164   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:44.891188   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:44.982166   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:46.640482   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.952307207s)
	I1205 20:57:46.640514   46374 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:46.640492   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.902257941s)
	I1205 20:57:46.640549   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640567   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.640954   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.640974   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.640985   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640994   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.641299   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.641316   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.641317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669046   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.669072   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.669393   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669467   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.669486   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229043   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.457564146s)
	I1205 20:57:47.229106   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229122   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.229427   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.229442   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229451   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229460   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.230375   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:47.230383   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.230399   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.269645   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.287430037s)
	I1205 20:57:47.269701   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.269717   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270028   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270044   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270053   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.270062   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270370   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270387   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270397   46374 addons.go:467] Verifying addon metrics-server=true in "embed-certs-331495"
	I1205 20:57:47.272963   46374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:45.704352   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:45.704382   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:45.704392   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:45.704402   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:45.704408   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:45.704427   46700 retry.go:31] will retry after 3.581529213s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:47.274340   46374 addons.go:502] enable addons completed in 2.819728831s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:47.280325   46374 pod_ready.go:102] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:48.746184   46374 pod_ready.go:92] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.746205   46374 pod_ready.go:81] duration metric: took 3.932903963s waiting for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.746212   46374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752060   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.752078   46374 pod_ready.go:81] duration metric: took 5.859638ms waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752088   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757347   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.757367   46374 pod_ready.go:81] duration metric: took 5.273527ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757375   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762850   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.762869   46374 pod_ready.go:81] duration metric: took 5.4878ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762876   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767874   46374 pod_ready.go:92] pod "kube-proxy-tbr8k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.767896   46374 pod_ready.go:81] duration metric: took 5.013139ms waiting for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767907   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141813   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:49.141836   46374 pod_ready.go:81] duration metric: took 373.922185ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141844   46374 pod_ready.go:38] duration metric: took 4.414384404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:49.141856   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:49.141898   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:49.156536   46374 api_server.go:72] duration metric: took 4.652397468s to wait for apiserver process to appear ...
	I1205 20:57:49.156566   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:49.156584   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:57:49.162837   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:57:49.164588   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:57:49.164606   46374 api_server.go:131] duration metric: took 8.03498ms to wait for apiserver health ...
	I1205 20:57:49.164613   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:49.346033   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:49.346065   46374 system_pods.go:61] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.346069   46374 system_pods.go:61] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.346074   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.346079   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.346082   46374 system_pods.go:61] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.346086   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.346092   46374 system_pods.go:61] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.346098   46374 system_pods.go:61] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:57:49.346105   46374 system_pods.go:74] duration metric: took 181.48718ms to wait for pod list to return data ...
	I1205 20:57:49.346111   46374 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:49.541758   46374 default_sa.go:45] found service account: "default"
	I1205 20:57:49.541783   46374 default_sa.go:55] duration metric: took 195.666774ms for default service account to be created ...
	I1205 20:57:49.541791   46374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:49.746101   46374 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:49.746131   46374 system_pods.go:89] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.746136   46374 system_pods.go:89] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.746142   46374 system_pods.go:89] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.746147   46374 system_pods.go:89] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.746150   46374 system_pods.go:89] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.746155   46374 system_pods.go:89] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.746170   46374 system_pods.go:89] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.746175   46374 system_pods.go:89] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Running
	I1205 20:57:49.746183   46374 system_pods.go:126] duration metric: took 204.388635ms to wait for k8s-apps to be running ...
	I1205 20:57:49.746193   46374 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:49.746241   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:49.764758   46374 system_svc.go:56] duration metric: took 18.554759ms WaitForService to wait for kubelet.
	I1205 20:57:49.764784   46374 kubeadm.go:581] duration metric: took 5.260652386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:49.764801   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:49.942067   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:49.942095   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:49.942105   46374 node_conditions.go:105] duration metric: took 177.300297ms to run NodePressure ...
	I1205 20:57:49.942114   46374 start.go:228] waiting for startup goroutines ...
	I1205 20:57:49.942120   46374 start.go:233] waiting for cluster config update ...
	I1205 20:57:49.942129   46374 start.go:242] writing updated cluster config ...
	I1205 20:57:49.942407   46374 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:49.995837   46374 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:57:49.997691   46374 out.go:177] * Done! kubectl is now configured to use "embed-certs-331495" cluster and "default" namespace by default
	I1205 20:57:49.291672   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:49.291700   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:49.291705   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:49.291713   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.291718   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:49.291736   46700 retry.go:31] will retry after 3.015806566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:52.313677   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:52.313703   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:52.313711   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:52.313721   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:52.313727   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:52.313747   46700 retry.go:31] will retry after 4.481475932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:56.804282   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:56.804308   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:56.804314   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:56.804321   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:56.804325   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:56.804340   46700 retry.go:31] will retry after 6.744179014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:03.556623   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:58:03.556652   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:03.556660   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:03.556669   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:03.556676   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:03.556696   46700 retry.go:31] will retry after 7.974872066s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:11.540488   46700 system_pods.go:86] 6 kube-system pods found
	I1205 20:58:11.540516   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:11.540522   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Pending
	I1205 20:58:11.540526   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Pending
	I1205 20:58:11.540530   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:11.540537   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:11.540541   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:11.540556   46700 retry.go:31] will retry after 10.29278609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:21.841415   46700 system_pods.go:86] 7 kube-system pods found
	I1205 20:58:21.841442   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:21.841450   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:21.841457   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:21.841463   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:21.841468   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:21.841478   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:21.841485   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:21.841503   46700 retry.go:31] will retry after 10.997616244s: missing components: kube-scheduler
	I1205 20:58:32.846965   46700 system_pods.go:86] 8 kube-system pods found
	I1205 20:58:32.846999   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:32.847007   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:32.847016   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:32.847023   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:32.847028   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:32.847032   46700 system_pods.go:89] "kube-scheduler-old-k8s-version-061206" [e19a40ac-ac9b-4dc8-8ed3-c13da266bb88] Running
	I1205 20:58:32.847041   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:32.847049   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:32.847061   46700 system_pods.go:126] duration metric: took 58.26978612s to wait for k8s-apps to be running ...
	I1205 20:58:32.847074   46700 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:58:32.847122   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:58:32.866233   46700 system_svc.go:56] duration metric: took 19.150294ms WaitForService to wait for kubelet.
	I1205 20:58:32.866267   46700 kubeadm.go:581] duration metric: took 1m7.409317219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:58:32.866308   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:58:32.870543   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:58:32.870569   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:58:32.870581   46700 node_conditions.go:105] duration metric: took 4.266682ms to run NodePressure ...
	I1205 20:58:32.870604   46700 start.go:228] waiting for startup goroutines ...
	I1205 20:58:32.870630   46700 start.go:233] waiting for cluster config update ...
	I1205 20:58:32.870646   46700 start.go:242] writing updated cluster config ...
	I1205 20:58:32.870888   46700 ssh_runner.go:195] Run: rm -f paused
	I1205 20:58:32.922554   46700 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1205 20:58:32.924288   46700 out.go:177] 
	W1205 20:58:32.925788   46700 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1205 20:58:32.927148   46700 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1205 20:58:32.928730   46700 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-061206" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:34 UTC, ends at Tue 2023-12-05 21:06:18 UTC. --
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.542518039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810378542506283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=efc39433-bbab-4be7-8ca7-7237a8973fcc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.543088594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=71baa620-9813-4d0e-be45-801f031078c6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.543133629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=71baa620-9813-4d0e-be45-801f031078c6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.543348528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=71baa620-9813-4d0e-be45-801f031078c6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.596016707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=48c8fc28-6f9b-4cbf-8f8c-c189917c4394 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.596096077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=48c8fc28-6f9b-4cbf-8f8c-c189917c4394 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.597606891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2f6673ea-e358-4fd3-b4bb-0cf7784454d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.598210127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810378598193640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2f6673ea-e358-4fd3-b4bb-0cf7784454d3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.599518371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c4970635-5de7-499e-b65b-18a82ac63b44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.599583777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c4970635-5de7-499e-b65b-18a82ac63b44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.599867457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c4970635-5de7-499e-b65b-18a82ac63b44 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.662505050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6c32cc92-9932-4ed3-8274-3f5141c706cb name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.662588599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6c32cc92-9932-4ed3-8274-3f5141c706cb name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.666411970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=364a4c68-3b13-4d40-8d04-07abb8c5c984 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.666999280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810378666978898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=364a4c68-3b13-4d40-8d04-07abb8c5c984 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.668332069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3bd2dcef-e308-4f5b-b67f-8bbff65bb571 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.668497616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3bd2dcef-e308-4f5b-b67f-8bbff65bb571 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.668890021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3bd2dcef-e308-4f5b-b67f-8bbff65bb571 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.710945828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b14a0227-eff2-4174-aa47-d8d8fcaefd70 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.711002919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b14a0227-eff2-4174-aa47-d8d8fcaefd70 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.717255805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=89bc5192-493a-411b-9e93-287b8209e8ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.717603538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810378717555393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=89bc5192-493a-411b-9e93-287b8209e8ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.718648975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25a89fa9-5423-4b34-a804-73a398ca61ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.718785552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25a89fa9-5423-4b34-a804-73a398ca61ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:18 no-preload-143651 crio[706]: time="2023-12-05 21:06:18.718944728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25a89fa9-5423-4b34-a804-73a398ca61ae name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	608d2cd91d615       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a4cf96b4b71fa       storage-provisioner
	a91f9eeb6cc7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   10d4291f05a4e       coredns-76f75df574-4n2wg
	88a3c5c33dce7       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff   9 minutes ago       Running             kube-proxy                0                   c91321d2a1ba8       kube-proxy-6txsz
	0c9415fdfc010       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   525b07ad59b91       etcd-no-preload-143651
	9c8a5763db1e6       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956   9 minutes ago       Running             kube-apiserver            2                   dab0b07edd952       kube-apiserver-no-preload-143651
	8a7cfeb23c032       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09   9 minutes ago       Running             kube-controller-manager   2                   f8a1fe2755ce1       kube-controller-manager-no-preload-143651
	a31fcd9330606       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542   9 minutes ago       Running             kube-scheduler            2                   f64d6e78b581c       kube-scheduler-no-preload-143651
	
	* 
	* ==> coredns [a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38467 - 37436 "HINFO IN 3778141838030031282.8307257075047644438. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010644248s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-143651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-143651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=no-preload-143651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:56:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-143651
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:06:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:02:25 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:02:25 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:02:25 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:02:25 +0000   Tue, 05 Dec 2023 20:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.162
	  Hostname:    no-preload-143651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a90a425d63b6431e94a42f715d9da1ce
	  System UUID:                a90a425d-63b6-431e-94a4-2f715d9da1ce
	  Boot ID:                    c0f23393-24ab-4ed0-8ede-e74c7715efea
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4n2wg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-143651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m23s
	  kube-system                 kube-apiserver-no-preload-143651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-143651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-6txsz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-143651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-xwfpm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m31s)  kubelet          Node no-preload-143651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m31s)  kubelet          Node no-preload-143651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m31s)  kubelet          Node no-preload-143651 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-143651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-143651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-143651 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-143651 event: Registered Node no-preload-143651 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.079828] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.550482] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.627147] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154622] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000007] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.494588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.245976] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.134158] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.145679] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.128557] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.226448] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[Dec 5 20:52] systemd-fstab-generator[1317]: Ignoring "noauto" for root device
	[ +19.563493] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:56] systemd-fstab-generator[3935]: Ignoring "noauto" for root device
	[  +9.850746] systemd-fstab-generator[4264]: Ignoring "noauto" for root device
	[Dec 5 20:57] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2] <==
	* {"level":"info","ts":"2023-12-05T20:56:52.537168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 switched to configuration voters=(14901140351056629893)"}
	{"level":"info","ts":"2023-12-05T20:56:52.537354Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"eabf72ed03489de5","local-member-id":"cecb7c331cf85085","added-peer-id":"cecb7c331cf85085","added-peer-peer-urls":["https://192.168.61.162:2380"]}
	{"level":"info","ts":"2023-12-05T20:56:52.565849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T20:56:52.56596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:56:52.565985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 received MsgPreVoteResp from cecb7c331cf85085 at term 1"}
	{"level":"info","ts":"2023-12-05T20:56:52.565998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 received MsgVoteResp from cecb7c331cf85085 at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cecb7c331cf85085 elected leader cecb7c331cf85085 at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.567602Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cecb7c331cf85085","local-member-attributes":"{Name:no-preload-143651 ClientURLs:[https://192.168.61.162:2379]}","request-path":"/0/members/cecb7c331cf85085/attributes","cluster-id":"eabf72ed03489de5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:56:52.567817Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.575812Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eabf72ed03489de5","local-member-id":"cecb7c331cf85085","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.575951Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.576003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.583935Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-05T20:56:52.584864Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.162:2380"}
	{"level":"info","ts":"2023-12-05T20:56:52.584916Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.162:2380"}
	{"level":"info","ts":"2023-12-05T20:56:52.585547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:56:52.589373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:56:52.592978Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"cecb7c331cf85085","initial-advertise-peer-urls":["https://192.168.61.162:2380"],"listen-peer-urls":["https://192.168.61.162:2380"],"advertise-client-urls":["https://192.168.61.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-05T20:56:52.59356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.162:2379"}
	{"level":"info","ts":"2023-12-05T20:56:52.595977Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:56:52.597501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:56:52.601809Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:56:52.601858Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:06:19 up 14 min,  0 users,  load average: 0.05, 0.25, 0.26
	Linux no-preload-143651 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc] <==
	* I1205 21:00:14.202122       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:01:54.864279       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:01:54.864448       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1205 21:01:55.865059       1 handler_proxy.go:93] no RequestInfo found in the context
	W1205 21:01:55.865059       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:01:55.865362       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:01:55.865371       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1205 21:01:55.865255       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:01:55.867473       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:02:55.866413       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:55.866726       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:02:55.866765       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:02:55.868717       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:55.868749       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:02:55.868757       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:04:55.867987       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:04:55.868189       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:04:55.868202       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:04:55.869177       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:04:55.869307       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:04:55.869348       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1] <==
	* I1205 21:00:41.902093       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:01:11.541580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:01:11.911249       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:01:41.548327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:01:41.921984       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:11.555946       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:11.930443       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:41.562164       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:41.940918       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:03:11.568372       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:11.950903       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:03:18.833526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="241.818µs"
	I1205 21:03:33.829811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="255.633µs"
	E1205 21:03:41.574389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:41.960591       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:04:11.581609       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:11.970339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:04:41.587340       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:41.979160       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:11.592614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:11.987747       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:41.599813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:41.999095       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:11.606936       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:12.013971       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739] <==
	* I1205 20:57:13.714293       1 server_others.go:72] "Using iptables proxy"
	I1205 20:57:13.744597       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.162"]
	I1205 20:57:14.715569       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:57:14.718202       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:57:14.718320       1 server_others.go:168] "Using iptables Proxier"
	I1205 20:57:14.732750       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:57:14.732944       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1205 20:57:14.732956       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:57:14.735530       1 config.go:188] "Starting service config controller"
	I1205 20:57:14.735584       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:57:14.735605       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:57:14.735609       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:57:14.736265       1 config.go:315] "Starting node config controller"
	I1205 20:57:14.736305       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:57:14.836814       1 shared_informer.go:318] Caches are synced for node config
	I1205 20:57:14.836903       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:57:14.836913       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172] <==
	* W1205 20:56:54.913389       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:56:54.913442       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:56:54.913926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:54.913973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.715552       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:56:55.715618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:56:55.813932       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:56:55.814047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 20:56:55.862360       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.862519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.898489       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:56:55.898598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 20:56:55.908328       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.908410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.962010       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.962194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:56.080019       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:56.080187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:56.156106       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:56:56.156277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 20:56:56.168790       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:56:56.168965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 20:56:56.433965       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:56:56.434115       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 20:56:59.281655       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:34 UTC, ends at Tue 2023-12-05 21:06:19 UTC. --
	Dec 05 21:03:33 no-preload-143651 kubelet[4271]: E1205 21:03:33.811640    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:03:44 no-preload-143651 kubelet[4271]: E1205 21:03:44.811530    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:03:56 no-preload-143651 kubelet[4271]: E1205 21:03:56.811331    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:03:58 no-preload-143651 kubelet[4271]: E1205 21:03:58.930952    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:03:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:03:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:03:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:04:11 no-preload-143651 kubelet[4271]: E1205 21:04:11.811608    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:04:23 no-preload-143651 kubelet[4271]: E1205 21:04:23.811520    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:04:35 no-preload-143651 kubelet[4271]: E1205 21:04:35.811121    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:04:50 no-preload-143651 kubelet[4271]: E1205 21:04:50.812907    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:04:58 no-preload-143651 kubelet[4271]: E1205 21:04:58.929816    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:04:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:04:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:04:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:05:03 no-preload-143651 kubelet[4271]: E1205 21:05:03.811469    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:05:16 no-preload-143651 kubelet[4271]: E1205 21:05:16.811626    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:05:31 no-preload-143651 kubelet[4271]: E1205 21:05:31.811308    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:05:46 no-preload-143651 kubelet[4271]: E1205 21:05:46.812913    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:05:58 no-preload-143651 kubelet[4271]: E1205 21:05:58.931939    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:05:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:05:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:05:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:06:01 no-preload-143651 kubelet[4271]: E1205 21:06:01.811218    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:06:15 no-preload-143651 kubelet[4271]: E1205 21:06:15.812336    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	
	* 
	* ==> storage-provisioner [608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf] <==
	* I1205 20:57:15.310427       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:15.328066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:15.328254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:15.341635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:15.341871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc8db603-0517-40d4-ba16-6f4b1b6d55f1", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6 became leader
	I1205 20:57:15.342550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6!
	I1205 20:57:15.443764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-143651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xwfpm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm: exit status 1 (76.479304ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xwfpm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm: exit status 1
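The named replica-set pod has already been replaced by the time describe runs, which is why it reports NotFound. A minimal shell sketch for locating whichever metrics-server pod currently exists and the image it is trying to pull, assuming the addon's usual k8s-app=metrics-server label and a single container in the deployment:

    # list the current metrics-server pod in kube-system, whatever its generated name is
    kubectl --context no-preload-143651 -n kube-system get pods -l k8s-app=metrics-server -o wide
    # show the image the deployment was rewritten to use (fake.domain/... per the kubelet log above)
    kubectl --context no-preload-143651 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'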
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-331495 -n embed-certs-331495
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:06:50.593019117 +0000 UTC m=+5523.405478742
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
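A minimal shell sketch for re-running the same readiness check by hand against the embed-certs-331495 context; the namespace and label selector come from the failure message above, and the 540s timeout mirrors the test's 9m0s wait:

    kubectl --context embed-certs-331495 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-331495 -n kubernetes-dashboard wait --for=condition=Ready \
      pod -l k8s-app=kubernetes-dashboard --timeout=540s
    # recent events often show why the dashboard pod never appeared
    kubectl --context embed-certs-331495 -n kubernetes-dashboard get events --sort-by=.lastTimestamp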
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-331495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-331495 logs -n 25: (1.634885859s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-46361
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
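The provision.go:112 line above lists every subject-alternative name minikube puts on the docker-machine server certificate for this profile (the node IP, localhost, 127.0.0.1, minikube, and the profile name), signed by the ca.pem/ca-key.pem pair named in the same line. As a rough illustration only (not minikube's actual code), a self-contained Go sketch that issues a certificate with an equivalent SAN set using the standard crypto/x509 package could look like this; it self-signs for brevity instead of signing with the CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (the real cert is CA-signed, not self-signed).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-061206"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-061206"},
		IPAddresses: []net.IP{net.ParseIP("192.168.50.116"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the PEM-encoded certificate (the server.pem analogue) on stdout.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}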
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
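The fix.go lines above parse the guest's `date +%s.%N` output and compare it against the host-side reading (the "Remote" timestamp), accepting the restart because the ~68ms delta is within tolerance. A minimal stand-alone Go sketch of that comparison, reusing the sample timestamp from this log; the tolerance constant here is an assumption for illustration, not minikube's value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, copied from the log above.
	guestOut := "1701809482.140720971"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now() // stands in for the host-side "Remote" timestamp
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance; the log only shows that ~68ms passed the check.
	const tolerance = time.Second
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}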
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
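The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.1, cgroup_manager is cgroupfs, and conmon_cgroup is pod. A small Go sketch that applies the same three rewrites to a local copy of the file; minikube itself runs sed over SSH, and the local path below is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// pause_image = "registry.k8s.io/pause:3.1"
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
	// Drop any existing conmon_cgroup line, then set cgroup_manager and
	// append conmon_cgroup = "pod" right after it, mirroring the sed sequence.
	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}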
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
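Since no images were preloaded, minikube copied the preloaded-images tarball to the guest and unpacked it with `tar -I lz4 -C /var -xf`, the command whose ~3s completion is reported above. A minimal Go sketch of running the same extraction via os/exec; the paths are taken from the log and the sketch assumes tar and lz4 are available locally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command from the log: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload tarball extracted under /var")
}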
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
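The `ln -fs ... /etc/ssl/certs/<hash>.0` steps above follow the standard OpenSSL hashed-directory layout: the link name is the certificate's subject hash (as printed by `openssl x509 -hash -noout`) plus a `.0` suffix, which is how the trust store looks up the minikube CA and the test certificates. A short Go sketch of producing one such link by shelling out to openssl, using a path from the log (would need root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs`: drop any existing link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}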
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
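Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if it will have expired by then. The same check expressed directly against NotAfter with Go's crypto/x509, using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400: fail if the cert expires within the next 86400s.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}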
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.787781   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.799426   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:51:53.809064   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:51:53.809101   46700 api_server.go:131] duration metric: took 8.476226007s to wait for apiserver health ...
	I1205 20:51:53.809112   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:53.809120   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:53.811188   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
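
The healthz wait for pid 46700 above follows a recognizable progression: 403 while only anonymous access is possible, 500 while post-start hooks (bootstrap-controller, rbac/bootstrap-roles, and so on) are still failing, then a plain 200 "ok" before the control plane version is read. The following is a minimal standalone sketch of polling the same endpoint; it is an illustration, not minikube's own api_server.go logic, and the IP, timeout, and the choice to skip TLS verification are assumptions modelled on the unauthenticated probe seen in the log.

	// healthz_probe.go - hedged sketch of polling an apiserver /healthz endpoint
	// until it reports healthy, mirroring the 403 -> 500 -> 200 sequence above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe in the log is unauthenticated, so certificate checks are skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.116:8443/healthz")
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // control plane reports healthy, as at 20:51:53.799 above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for a healthy apiserver")
	}
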
	I1205 20:51:53.496825   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.170377466s)
	I1205 20:51:53.496856   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1205 20:51:53.496877   46866 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:53.496925   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:55.657835   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.160865472s)
	I1205 20:51:55.657869   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1205 20:51:55.657898   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:55.657955   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:52.104758   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105274   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105301   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:52.105232   47900 retry.go:31] will retry after 2.172151449s: waiting for machine to come up
	I1205 20:51:54.279576   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280091   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:54.280054   47900 retry.go:31] will retry after 3.042324499s: waiting for machine to come up
	I1205 20:51:53.812841   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:51:53.835912   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:51:53.920892   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:51:53.943982   46700 system_pods.go:59] 7 kube-system pods found
	I1205 20:51:53.944026   46700 system_pods.go:61] "coredns-5644d7b6d9-kqhgk" [473e53e3-a0bd-4dcb-88c1-d61e9cc3e686] Running
	I1205 20:51:53.944034   46700 system_pods.go:61] "etcd-old-k8s-version-061206" [a2a6a459-41a3-49e3-b32e-a091317390ea] Running
	I1205 20:51:53.944041   46700 system_pods.go:61] "kube-apiserver-old-k8s-version-061206" [9cf24995-fccb-47e4-8d4a-870198b7c82f] Running
	I1205 20:51:53.944054   46700 system_pods.go:61] "kube-controller-manager-old-k8s-version-061206" [225a4a8b-2b6e-46f4-8bd9-9a375b05c23c] Pending
	I1205 20:51:53.944061   46700 system_pods.go:61] "kube-proxy-r5n6g" [5db8876d-ecff-40b3-a61d-aeaf7870166c] Running
	I1205 20:51:53.944068   46700 system_pods.go:61] "kube-scheduler-old-k8s-version-061206" [de56d925-45b3-4c36-b2c2-c90938793aa2] Running
	I1205 20:51:53.944075   46700 system_pods.go:61] "storage-provisioner" [d5d57d93-f94b-4a3e-8c65-25cd4d71b9d5] Running
	I1205 20:51:53.944083   46700 system_pods.go:74] duration metric: took 23.165628ms to wait for pod list to return data ...
	I1205 20:51:53.944093   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:51:53.956907   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:51:53.956949   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:51:53.956964   46700 node_conditions.go:105] duration metric: took 12.864098ms to run NodePressure ...
	I1205 20:51:53.956986   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:54.482145   46700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:51:54.492629   46700 retry.go:31] will retry after 326.419845ms: kubelet not initialised
	I1205 20:51:54.826701   46700 retry.go:31] will retry after 396.475289ms: kubelet not initialised
	I1205 20:51:55.228971   46700 retry.go:31] will retry after 752.153604ms: kubelet not initialised
	I1205 20:51:55.987713   46700 retry.go:31] will retry after 881.822561ms: kubelet not initialised
	I1205 20:51:56.877407   46700 retry.go:31] will retry after 824.757816ms: kubelet not initialised
	I1205 20:51:57.707927   46700 retry.go:31] will retry after 2.392241385s: kubelet not initialised
	I1205 20:51:58.643374   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.985387711s)
	I1205 20:51:58.643408   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1205 20:51:58.643434   46866 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:58.643500   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:59.407245   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:51:59.407282   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:59.407333   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
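
For pid 46866 the image-cache path is also visible end to end in the lines above: stat the tarball under /var/lib/minikube/images, skip the copy when it already exists ("copy: skipping ... (exists)"), then run "sudo podman load -i" so the CRI-O image store picks it up ("Transferred and loaded ... from cache"). The sketch below mirrors that sequence over ssh; it is an illustration, not cache_images.go, and the node IP, key path, and image list are placeholder assumptions because they are not all shown in this excerpt.

	// load_cached_images.go - hedged sketch of the stat/skip/podman-load flow logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runOverSSH executes one command on the node, the way ssh_runner does in the log.
	func runOverSSH(host, command string) error {
		cmd := exec.Command("ssh", "-i", "/path/to/id_rsa", "docker@"+host, command)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %s\n%s", command, out)
		return err
	}

	func main() {
		host := "192.168.0.2" // assumption: this node's IP is not shown in the excerpt
		images := []string{
			"/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1",
			"/var/lib/minikube/images/etcd_3.5.10-0",
		}
		for _, img := range images {
			// A successful stat corresponds to the "copy: skipping ... (exists)" lines.
			if err := runOverSSH(host, "stat "+img); err != nil {
				fmt.Println("tarball missing on the node, it would need to be copied first:", err)
				continue
			}
			// Matches the "sudo podman load -i ..." commands in the log.
			if err := runOverSSH(host, "sudo podman load -i "+img); err != nil {
				fmt.Println("load failed:", err)
			}
		}
	}
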
	I1205 20:51:57.324016   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324534   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324565   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:57.324482   47900 retry.go:31] will retry after 3.449667479s: waiting for machine to come up
	I1205 20:52:00.776644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777141   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Found IP for machine: 192.168.39.27
	I1205 20:52:00.777175   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777186   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserving static IP address...
	I1205 20:52:00.777825   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserved static IP address: 192.168.39.27
	I1205 20:52:00.777878   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.777892   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for SSH to be available...
	I1205 20:52:00.777918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | skip adding static IP to network mk-default-k8s-diff-port-463614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"}
	I1205 20:52:00.777929   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Getting to WaitForSSH function...
	I1205 20:52:00.780317   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.780729   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH client type: external
	I1205 20:52:00.780909   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa (-rw-------)
	I1205 20:52:00.780940   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:00.780959   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | About to run SSH command:
	I1205 20:52:00.780980   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | exit 0
	I1205 20:52:00.922857   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:00.923204   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetConfigRaw
	I1205 20:52:00.923973   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:00.927405   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.927885   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.927918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.928217   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:52:00.928469   47365 machine.go:88] provisioning docker machine ...
	I1205 20:52:00.928497   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:00.928735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.928912   47365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-463614"
	I1205 20:52:00.928938   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.929092   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:00.931664   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932096   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.932130   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:00.932496   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932672   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932822   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:00.932990   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:00.933401   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:00.933420   47365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-463614 && echo "default-k8s-diff-port-463614" | sudo tee /etc/hostname
	I1205 20:52:01.078295   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-463614
	
	I1205 20:52:01.078332   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.081604   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082051   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.082079   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082240   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.082492   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.083034   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.083506   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.083535   47365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-463614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-463614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-463614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:01.215856   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:01.215884   47365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:01.215912   47365 buildroot.go:174] setting up certificates
	I1205 20:52:01.215927   47365 provision.go:83] configureAuth start
	I1205 20:52:01.215947   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:01.216246   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:01.219169   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219465   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.219503   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.221768   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222137   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.222171   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222410   47365 provision.go:138] copyHostCerts
	I1205 20:52:01.222493   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:01.222508   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:01.222568   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:01.222686   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:01.222717   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:01.222757   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:01.222825   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:01.222832   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:01.222856   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:01.222921   47365 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-463614 san=[192.168.39.27 192.168.39.27 localhost 127.0.0.1 minikube default-k8s-diff-port-463614]
	I1205 20:52:02.247282   46374 start.go:369] acquired machines lock for "embed-certs-331495" in 54.15977635s
	I1205 20:52:02.247348   46374 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:52:02.247360   46374 fix.go:54] fixHost starting: 
	I1205 20:52:02.247794   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:02.247830   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:02.265529   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I1205 20:52:02.265970   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:02.266457   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:52:02.266484   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:02.266825   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:02.267016   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:02.267185   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:52:02.268838   46374 fix.go:102] recreateIfNeeded on embed-certs-331495: state=Stopped err=<nil>
	I1205 20:52:02.268859   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	W1205 20:52:02.269010   46374 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:52:02.270658   46374 out.go:177] * Restarting existing kvm2 VM for "embed-certs-331495" ...
	I1205 20:52:00.114757   46700 retry.go:31] will retry after 2.136164682s: kubelet not initialised
	I1205 20:52:02.258242   46700 retry.go:31] will retry after 4.673214987s: kubelet not initialised
	I1205 20:52:01.474739   47365 provision.go:172] copyRemoteCerts
	I1205 20:52:01.474804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:01.474834   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.477249   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477632   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.477659   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477908   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.478119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.478313   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.478463   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:01.569617   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:01.594120   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 20:52:01.618066   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:52:01.643143   47365 provision.go:86] duration metric: configureAuth took 427.201784ms
	I1205 20:52:01.643169   47365 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:01.643353   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:01.643435   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.646320   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.646821   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.646881   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.647001   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.647206   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647407   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647555   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.647721   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.648105   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.648135   47365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:01.996428   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:01.996456   47365 machine.go:91] provisioned docker machine in 1.067968652s
	I1205 20:52:01.996468   47365 start.go:300] post-start starting for "default-k8s-diff-port-463614" (driver="kvm2")
	I1205 20:52:01.996482   47365 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:01.996502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:01.996804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:01.996829   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.999880   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000345   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.000378   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.000733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.000872   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.001041   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.088194   47365 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:02.092422   47365 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:02.092447   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:02.092522   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:02.092607   47365 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:02.092692   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:02.100847   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:02.125282   47365 start.go:303] post-start completed in 128.798422ms
	I1205 20:52:02.125308   47365 fix.go:56] fixHost completed within 20.234129302s
	I1205 20:52:02.125334   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.128159   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128506   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.128539   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.128970   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129157   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129330   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.129505   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:02.129980   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:02.130001   47365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:02.247134   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809522.185244520
	
	I1205 20:52:02.247160   47365 fix.go:206] guest clock: 1701809522.185244520
	I1205 20:52:02.247170   47365 fix.go:219] Guest: 2023-12-05 20:52:02.18524452 +0000 UTC Remote: 2023-12-05 20:52:02.125313647 +0000 UTC m=+165.907305797 (delta=59.930873ms)
	I1205 20:52:02.247193   47365 fix.go:190] guest clock delta is within tolerance: 59.930873ms
	I1205 20:52:02.247199   47365 start.go:83] releasing machines lock for "default-k8s-diff-port-463614", held for 20.356057608s
	I1205 20:52:02.247233   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.247561   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:02.250476   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.250918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.250952   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.251123   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.251833   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252026   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252117   47365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:02.252168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.252434   47365 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:02.252461   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.255221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255382   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.255750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.255949   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.256004   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.256060   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256278   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.256288   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256453   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256447   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.256586   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256698   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.343546   47365 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:02.368171   47365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:02.518472   47365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:02.524733   47365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:02.524808   47365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:02.541607   47365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:02.541632   47365 start.go:475] detecting cgroup driver to use...
	I1205 20:52:02.541703   47365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:02.560122   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:02.575179   47365 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:02.575244   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:02.591489   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:02.606022   47365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:02.711424   47365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:02.828436   47365 docker.go:219] disabling docker service ...
	I1205 20:52:02.828515   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:02.844209   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:02.860693   47365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:02.979799   47365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:03.111682   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:03.128706   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:03.147984   47365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:03.148057   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.160998   47365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:03.161068   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.173347   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.185126   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.195772   47365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:03.206308   47365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:03.215053   47365 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:03.215103   47365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:03.227755   47365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:03.237219   47365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:03.369712   47365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:03.561508   47365 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:03.561575   47365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:03.569369   47365 start.go:543] Will wait 60s for crictl version
	I1205 20:52:03.569437   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:52:03.575388   47365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:03.618355   47365 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:03.618458   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.670174   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.716011   47365 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
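
For reference, the crictl/CRI-O setup above amounts to writing /etc/crictl.yaml and rewriting two keys in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of the same end state, done locally rather than over SSH (illustrative only, not minikube's own code):

package main

import (
	"log"
	"os"
	"regexp"
)

// writeCrictlConfig points crictl at the CRI-O socket, matching the logged
// "printf ... | sudo tee /etc/crictl.yaml" step.
func writeCrictlConfig(path string) error {
	return os.WriteFile(path, []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0o644)
}

// rewriteCrioConf pins the pause image and cgroup manager the same way the
// two logged `sed -i` commands do.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := writeCrictlConfig("/etc/crictl.yaml"); err != nil {
		log.Fatal(err)
	}
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
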
	I1205 20:52:02.272006   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Start
	I1205 20:52:02.272171   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring networks are active...
	I1205 20:52:02.272890   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network default is active
	I1205 20:52:02.273264   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network mk-embed-certs-331495 is active
	I1205 20:52:02.273634   46374 main.go:141] libmachine: (embed-certs-331495) Getting domain xml...
	I1205 20:52:02.274223   46374 main.go:141] libmachine: (embed-certs-331495) Creating domain...
	I1205 20:52:03.644135   46374 main.go:141] libmachine: (embed-certs-331495) Waiting to get IP...
	I1205 20:52:03.645065   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.645451   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.645561   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.645439   48036 retry.go:31] will retry after 246.973389ms: waiting for machine to come up
	I1205 20:52:03.894137   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.894708   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.894813   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.894768   48036 retry.go:31] will retry after 353.753964ms: waiting for machine to come up
	I1205 20:52:04.250496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.251201   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.251231   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.251151   48036 retry.go:31] will retry after 370.705045ms: waiting for machine to come up
	I1205 20:52:04.623959   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.624532   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.624563   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.624488   48036 retry.go:31] will retry after 409.148704ms: waiting for machine to come up
	I1205 20:52:05.035991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.036492   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.036521   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.036458   48036 retry.go:31] will retry after 585.089935ms: waiting for machine to come up
	I1205 20:52:01.272757   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.865397348s)
	I1205 20:52:01.272791   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1205 20:52:01.272823   46866 cache_images.go:123] Successfully loaded all cached images
	I1205 20:52:01.272830   46866 cache_images.go:92] LoadImages completed in 17.860858219s
	I1205 20:52:01.272913   46866 ssh_runner.go:195] Run: crio config
	I1205 20:52:01.346651   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:01.346671   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:01.346689   46866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:01.346715   46866 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.162 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143651 NodeName:no-preload-143651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:01.346890   46866 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143651"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:01.347005   46866 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:01.347080   46866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1205 20:52:01.360759   46866 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:01.360818   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:01.372537   46866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 20:52:01.389057   46866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1205 20:52:01.405689   46866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1205 20:52:01.426066   46866 ssh_runner.go:195] Run: grep 192.168.61.162	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:01.430363   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
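
The one-liner above swaps the control-plane.minikube.internal entry in /etc/hosts for the node IP by filtering out the old line and appending a new one. The same edit in plain Go, as an illustration only (minikube shells out exactly as logged):

package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the logged grep/echo/cp pipeline.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.61.162", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
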
	I1205 20:52:01.443015   46866 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651 for IP: 192.168.61.162
	I1205 20:52:01.443049   46866 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:01.443202   46866 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:01.443254   46866 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:01.443337   46866 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.key
	I1205 20:52:01.443423   46866 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key.5bf94fca
	I1205 20:52:01.443477   46866 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key
	I1205 20:52:01.443626   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:01.443664   46866 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:01.443689   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:01.443729   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:01.443768   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:01.443800   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:01.443868   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:01.444505   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:01.471368   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:01.495925   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:01.520040   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:01.542515   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:01.565061   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:01.592011   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:01.615244   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:01.640425   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:01.666161   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:01.688991   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:01.711978   46866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:01.728642   46866 ssh_runner.go:195] Run: openssl version
	I1205 20:52:01.734248   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:01.746741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751589   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751647   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.757299   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:01.768280   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:01.779234   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783897   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783961   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.789668   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:01.800797   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:01.814741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819713   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819774   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.825538   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:01.836443   46866 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:01.842191   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:01.850025   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:01.857120   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:01.863507   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:01.870887   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:01.878657   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
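
Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. An equivalent check with Go's crypto/x509, shown as a sketch (the path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// before now+window, the same question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
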
	I1205 20:52:01.886121   46866 kubeadm.go:404] StartCluster: {Name:no-preload-143651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:01.886245   46866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:01.886311   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:01.933026   46866 cri.go:89] found id: ""
	I1205 20:52:01.933096   46866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:01.946862   46866 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:01.946891   46866 kubeadm.go:636] restartCluster start
	I1205 20:52:01.946950   46866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:01.959468   46866 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.960467   46866 kubeconfig.go:92] found "no-preload-143651" server: "https://192.168.61.162:8443"
	I1205 20:52:01.962804   46866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:01.975351   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.975427   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:01.988408   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.988439   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.988493   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.001669   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:02.502716   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:02.502781   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.515220   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.002777   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.002843   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.016667   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.501748   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.501840   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.515761   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.001797   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.001873   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.018140   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.502697   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.502791   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.518059   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.002414   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.002515   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.021107   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.502637   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.502733   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.521380   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
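
The repeated "Checking apiserver status ..." / "stopped: unable to get apiserver pid" pairs are a roughly 500ms poll that keeps going until the process appears or the surrounding context deadline expires (see the later "needs reconfigure: apiserver error: context deadline exceeded" line). A Go sketch of that poll shape; apiserverRunning mirrors the logged pgrep check but without sudo:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the logged check: pgrep for a kube-apiserver
// process. It fails while the control plane is down.
func apiserverRunning(ctx context.Context) bool {
	return exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls every 500ms until the process shows up or the
// context deadline expires, matching the cadence in the log.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if apiserverRunning(ctx) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("apiserver error:", err)
	}
}
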
	I1205 20:52:03.717595   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:03.720774   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721210   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:03.721242   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721414   47365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:03.726330   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:03.738414   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:03.738479   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:03.777318   47365 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:03.777380   47365 ssh_runner.go:195] Run: which lz4
	I1205 20:52:03.781463   47365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:52:03.785728   47365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:03.785759   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:05.712791   47365 crio.go:444] Took 1.931355 seconds to copy over tarball
	I1205 20:52:05.712888   47365 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
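
The preload handling above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, then unpacks it with tar -I lz4 -C /var. A compact Go sketch of the unpack step, run locally here for illustration (minikube drives it over SSH):

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks the lz4-compressed image tarball into dir, the same
// command the log runs on the guest: tar -I lz4 -C <dir> -xf <tarball>.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // tarball not transferred yet
	}
	cmd := exec.Command("tar", "-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}
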
	I1205 20:52:06.939842   46700 retry.go:31] will retry after 8.345823287s: kubelet not initialised
	I1205 20:52:05.623348   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.623894   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.623928   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.623844   48036 retry.go:31] will retry after 819.796622ms: waiting for machine to come up
	I1205 20:52:06.445034   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:06.445471   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:06.445504   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:06.445427   48036 retry.go:31] will retry after 716.017152ms: waiting for machine to come up
	I1205 20:52:07.162965   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:07.163496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:07.163526   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:07.163445   48036 retry.go:31] will retry after 1.085415508s: waiting for machine to come up
	I1205 20:52:08.250373   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:08.250962   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:08.250999   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:08.250909   48036 retry.go:31] will retry after 1.128069986s: waiting for machine to come up
	I1205 20:52:09.380537   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:09.381001   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:09.381027   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:09.380964   48036 retry.go:31] will retry after 1.475239998s: waiting for machine to come up
	I1205 20:52:06.002168   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.002247   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.025123   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:06.502715   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.502831   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.519395   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.001937   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.002068   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.019028   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.501962   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.502059   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.515098   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.002769   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.002909   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.020137   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.501807   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.501949   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.518082   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.002421   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.002505   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.016089   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.502171   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.502261   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.515449   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.001975   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.002117   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.013831   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.502398   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.502481   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.514939   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.946250   47365 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.233316669s)
	I1205 20:52:08.946291   47365 crio.go:451] Took 3.233468 seconds to extract the tarball
	I1205 20:52:08.946304   47365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:08.988526   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:09.041782   47365 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:09.041812   47365 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:09.041908   47365 ssh_runner.go:195] Run: crio config
	I1205 20:52:09.105852   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:09.105879   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:09.105901   47365 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:09.105926   47365 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-463614 NodeName:default-k8s-diff-port-463614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:09.106114   47365 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-463614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:09.106218   47365 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-463614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1205 20:52:09.106295   47365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:09.116476   47365 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:09.116569   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:09.125304   47365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1205 20:52:09.141963   47365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:09.158882   47365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1205 20:52:09.177829   47365 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:09.181803   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:09.194791   47365 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614 for IP: 192.168.39.27
	I1205 20:52:09.194824   47365 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:09.194968   47365 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:09.195028   47365 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:09.195135   47365 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.key
	I1205 20:52:09.195225   47365 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key.310d49ea
	I1205 20:52:09.195287   47365 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key
	I1205 20:52:09.195457   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:09.195502   47365 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:09.195519   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:09.195561   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:09.195594   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:09.195625   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:09.195698   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:09.196495   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:09.221945   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:09.249557   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:09.279843   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:09.309602   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:09.338163   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:09.365034   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:09.394774   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:09.420786   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:09.445787   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:09.474838   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:09.499751   47365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:09.523805   47365 ssh_runner.go:195] Run: openssl version
	I1205 20:52:09.530143   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:09.545184   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550681   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550751   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.558670   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:09.573789   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:09.585134   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591055   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591136   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.597286   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:09.608901   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:09.620949   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626190   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626267   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.632394   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:09.645362   47365 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:09.650768   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:09.657084   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:09.663183   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:09.669093   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:09.675365   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:09.681992   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
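
The hash/symlink pairs above are how each CA is made trusted system-wide: `openssl x509 -hash -noout` prints the subject hash and the certificate is linked as /etc/ssl/certs/<hash>.0. A Go sketch of that pairing, shelling out to openssl just as the log does (paths taken from the log; not minikube's implementation):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of certPath and symlinks the
// cert into certsDir as <hash>.0, mirroring the logged openssl + ln -fs pair.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
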
	I1205 20:52:09.688849   47365 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:09.688963   47365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:09.689035   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:09.730999   47365 cri.go:89] found id: ""
	I1205 20:52:09.731061   47365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:09.741609   47365 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:09.741640   47365 kubeadm.go:636] restartCluster start
	I1205 20:52:09.741700   47365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:09.751658   47365 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.752671   47365 kubeconfig.go:92] found "default-k8s-diff-port-463614" server: "https://192.168.39.27:8444"
	I1205 20:52:09.755361   47365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:09.765922   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.766006   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.781956   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.781983   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.782033   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.795265   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.295986   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.296088   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.312309   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.795832   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.795959   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.808880   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.857552   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:10.857968   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:10.858002   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:10.857911   48036 retry.go:31] will retry after 1.882319488s: waiting for machine to come up
	I1205 20:52:12.741608   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:12.742051   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:12.742081   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:12.742006   48036 retry.go:31] will retry after 2.598691975s: waiting for machine to come up
	I1205 20:52:15.343818   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:15.344360   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:15.344385   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:15.344306   48036 retry.go:31] will retry after 3.313897625s: waiting for machine to come up
	I1205 20:52:11.002661   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.002740   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.014931   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.502548   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.502621   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.516090   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.975668   46866 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:11.975724   46866 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:11.975739   46866 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:11.975820   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:12.032265   46866 cri.go:89] found id: ""
	I1205 20:52:12.032364   46866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:12.050705   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:12.060629   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:12.060726   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.073988   46866 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.074015   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:12.209842   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.318235   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108353469s)
	I1205 20:52:13.318280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.518224   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.606064   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.695764   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:13.695849   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:13.718394   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.237554   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.737066   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:15.236911   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:11.295662   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.295754   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.308889   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.796322   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.796432   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.812351   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.295433   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.295527   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.308482   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.795889   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.795961   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.812458   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.296017   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.296114   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.312758   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.796111   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.796256   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.812247   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.295726   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.295808   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.308712   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.796358   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.796439   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.813173   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.295541   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.295632   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.312665   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.796231   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.796378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.816767   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.292395   46700 retry.go:31] will retry after 12.309806949s: kubelet not initialised
	I1205 20:52:18.659431   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:18.659915   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:18.659944   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:18.659867   48036 retry.go:31] will retry after 3.672641091s: waiting for machine to come up
	I1205 20:52:15.737064   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.237656   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.263010   46866 api_server.go:72] duration metric: took 2.567245952s to wait for apiserver process to appear ...
	I1205 20:52:16.263039   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:16.263057   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.286115   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.286153   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.286173   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.334683   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.334710   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.835110   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.840833   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:19.840866   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.335444   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.355923   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:20.355956   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.835568   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.840974   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:52:20.849239   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:52:20.849274   46866 api_server.go:131] duration metric: took 4.586226618s to wait for apiserver health ...
	I1205 20:52:20.849284   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:20.849323   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:20.850829   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:16.295650   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.295729   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.312742   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:16.796283   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.796364   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.812822   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.295879   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.295953   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.312254   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.795437   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.795519   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.808598   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.296187   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.296266   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.312808   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.796368   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.796480   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.812986   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.295511   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:19.295576   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:19.308830   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.766569   47365 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:19.766653   47365 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:19.766673   47365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:19.766748   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:19.820510   47365 cri.go:89] found id: ""
	I1205 20:52:19.820590   47365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:19.842229   47365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:19.853234   47365 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:19.853293   47365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866181   47365 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866220   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:20.022098   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.165439   47365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143295704s)
	I1205 20:52:21.165472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:22.333575   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334146   46374 main.go:141] libmachine: (embed-certs-331495) Found IP for machine: 192.168.72.180
	I1205 20:52:22.334189   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has current primary IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334205   46374 main.go:141] libmachine: (embed-certs-331495) Reserving static IP address...
	I1205 20:52:22.334654   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.334686   46374 main.go:141] libmachine: (embed-certs-331495) DBG | skip adding static IP to network mk-embed-certs-331495 - found existing host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"}
	I1205 20:52:22.334699   46374 main.go:141] libmachine: (embed-certs-331495) Reserved static IP address: 192.168.72.180
	I1205 20:52:22.334717   46374 main.go:141] libmachine: (embed-certs-331495) Waiting for SSH to be available...
	I1205 20:52:22.334727   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Getting to WaitForSSH function...
	I1205 20:52:22.337411   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337832   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.337863   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337976   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH client type: external
	I1205 20:52:22.338005   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa (-rw-------)
	I1205 20:52:22.338038   46374 main.go:141] libmachine: (embed-certs-331495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:22.338057   46374 main.go:141] libmachine: (embed-certs-331495) DBG | About to run SSH command:
	I1205 20:52:22.338071   46374 main.go:141] libmachine: (embed-certs-331495) DBG | exit 0
	I1205 20:52:22.430984   46374 main.go:141] libmachine: (embed-certs-331495) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:22.431374   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetConfigRaw
	I1205 20:52:22.432120   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.435317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.435737   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.435772   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.436044   46374 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:52:22.436283   46374 machine.go:88] provisioning docker machine ...
	I1205 20:52:22.436304   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:22.436519   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436687   46374 buildroot.go:166] provisioning hostname "embed-certs-331495"
	I1205 20:52:22.436707   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436882   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.439595   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.439966   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.439998   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.440179   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.440392   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440558   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440718   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.440891   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.441216   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.441235   46374 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-331495 && echo "embed-certs-331495" | sudo tee /etc/hostname
	I1205 20:52:22.584600   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-331495
	
	I1205 20:52:22.584662   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.587640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588053   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.588083   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588255   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.588469   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.588985   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.589340   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.589369   46374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-331495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-331495/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-331495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:22.722352   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:22.722390   46374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:22.722437   46374 buildroot.go:174] setting up certificates
	I1205 20:52:22.722459   46374 provision.go:83] configureAuth start
	I1205 20:52:22.722475   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.722776   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.725826   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726254   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.726313   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726616   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.729267   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729606   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.729640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729798   46374 provision.go:138] copyHostCerts
	I1205 20:52:22.729843   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:22.729853   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:22.729907   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:22.729986   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:22.729994   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:22.730019   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:22.730090   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:22.730100   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:22.730128   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:22.730188   46374 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.embed-certs-331495 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-331495]
	I1205 20:52:22.795361   46374 provision.go:172] copyRemoteCerts
	I1205 20:52:22.795435   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:22.795464   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.798629   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799006   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.799052   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799222   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.799448   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.799617   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.799774   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:22.892255   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:52:22.929940   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:52:22.966087   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:22.998887   46374 provision.go:86] duration metric: configureAuth took 276.409362ms
	I1205 20:52:22.998937   46374 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:22.999160   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:22.999253   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.002604   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.002992   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.003033   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.003265   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.003516   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003723   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003916   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.004090   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.004540   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.004568   46374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:23.371418   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:23.371450   46374 machine.go:91] provisioned docker machine in 935.149228ms
	I1205 20:52:23.371464   46374 start.go:300] post-start starting for "embed-certs-331495" (driver="kvm2")
	I1205 20:52:23.371477   46374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:23.371500   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.371872   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:23.371911   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.375440   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.375960   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.375991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.376130   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.376328   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.376512   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.376693   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.472304   46374 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:23.477044   46374 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:23.477070   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:23.477177   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:23.477287   46374 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:23.477425   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:23.493987   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:23.519048   46374 start.go:303] post-start completed in 147.566985ms
	I1205 20:52:23.519082   46374 fix.go:56] fixHost completed within 21.27172194s
	I1205 20:52:23.519107   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.522260   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522700   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.522735   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522967   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.523238   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523456   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.523893   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.524220   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.524239   46374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:23.648717   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809543.591713401
	
	I1205 20:52:23.648743   46374 fix.go:206] guest clock: 1701809543.591713401
	I1205 20:52:23.648755   46374 fix.go:219] Guest: 2023-12-05 20:52:23.591713401 +0000 UTC Remote: 2023-12-05 20:52:23.519087629 +0000 UTC m=+358.020977056 (delta=72.625772ms)
	I1205 20:52:23.648800   46374 fix.go:190] guest clock delta is within tolerance: 72.625772ms
	I1205 20:52:23.648808   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 21.401495157s
	I1205 20:52:23.648838   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.649149   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:23.652098   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652534   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.652577   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652773   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653350   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653552   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653655   46374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:23.653709   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.653948   46374 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:23.653989   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.657266   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657547   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657637   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657669   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657946   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657957   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.657970   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.658236   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.658250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658438   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658532   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658756   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.658785   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658933   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.777965   46374 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:23.784199   46374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:23.948621   46374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:23.957081   46374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:23.957163   46374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:23.978991   46374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:23.979023   46374 start.go:475] detecting cgroup driver to use...
	I1205 20:52:23.979124   46374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:23.997195   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:24.015420   46374 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:24.015494   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:24.031407   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:24.047587   46374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:24.200996   46374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:24.332015   46374 docker.go:219] disabling docker service ...
	I1205 20:52:24.332095   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:24.350586   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:24.367457   46374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:24.545467   46374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:24.733692   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:24.748391   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:24.768555   46374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:24.768644   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.780668   46374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:24.780740   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.792671   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.806500   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.818442   46374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:24.829822   46374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:24.842070   46374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:24.842138   46374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:24.857370   46374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:24.867993   46374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:25.024629   46374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:25.231556   46374 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:25.231630   46374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:25.237863   46374 start.go:543] Will wait 60s for crictl version
	I1205 20:52:25.237929   46374 ssh_runner.go:195] Run: which crictl
	I1205 20:52:25.242501   46374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:25.289507   46374 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:25.289591   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.340432   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.398354   46374 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:25.399701   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:25.402614   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.402997   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:25.403029   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.403259   46374 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:25.407873   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:25.420725   46374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:25.420801   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:25.468651   46374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:25.468726   46374 ssh_runner.go:195] Run: which lz4
	I1205 20:52:25.473976   46374 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:52:25.478835   46374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:25.478871   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:20.852220   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:20.867614   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:20.892008   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:20.912985   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:20.913027   46866 system_pods.go:61] "coredns-76f75df574-8d24t" [10265d3b-ddf0-4559-8194-d42563df88a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:20.913038   46866 system_pods.go:61] "etcd-no-preload-143651" [a6b62f23-a944-41ec-b465-6027fcf1f413] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:20.913051   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [5a6b5874-6c6b-4ed6-aa68-8e7fc35a486e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:20.913061   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [42b01d8c-2d8f-467e-8183-eef2e6f73b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:20.913074   46866 system_pods.go:61] "kube-proxy-mltvl" [9adea5d0-e824-40ff-b5b4-16f84fd439ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:20.913085   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [17474fca-8390-48db-bebe-47c1e2cf7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:20.913107   46866 system_pods.go:61] "metrics-server-57f55c9bc5-mhxpn" [3eb25a58-bea3-4266-9bf8-8f186ee65e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:20.913120   46866 system_pods.go:61] "storage-provisioner" [cfe9d24c-a534-4778-980b-99f7addcf0b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:20.913132   46866 system_pods.go:74] duration metric: took 21.101691ms to wait for pod list to return data ...
	I1205 20:52:20.913143   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:20.917108   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:20.917140   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:20.917156   46866 node_conditions.go:105] duration metric: took 4.003994ms to run NodePressure ...
	I1205 20:52:20.917180   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.315507   46866 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321271   46866 kubeadm.go:787] kubelet initialised
	I1205 20:52:21.321301   46866 kubeadm.go:788] duration metric: took 5.763416ms waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321310   46866 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:21.327760   46866 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:23.354192   46866 pod_ready.go:102] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:25.353274   46866 pod_ready.go:92] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:25.353356   46866 pod_ready.go:81] duration metric: took 4.02555842s waiting for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:25.353372   46866 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
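
The pod_ready lines above poll each system-critical pod until its Ready condition turns True or the 4m0s budget runs out. A minimal client-go sketch of that kind of wait, assuming a reachable kubeconfig; the pod name is taken from the log and the helper itself is illustrative, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has condition Ready=True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same overall budget as the log's "waiting up to 4m0s".
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podReady(ctx, cs, "kube-system", "coredns-76f75df574-8d24t")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
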
	I1205 20:52:21.402472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.498902   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.585971   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:21.586073   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:21.605993   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.120378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.620326   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.119466   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.619549   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.120228   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.143130   47365 api_server.go:72] duration metric: took 2.557157382s to wait for apiserver process to appear ...
	I1205 20:52:24.143163   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:24.143182   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:27.608165   46700 retry.go:31] will retry after 7.717398196s: kubelet not initialised
	I1205 20:52:28.335417   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:28.335446   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:28.335457   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.429478   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.429507   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:28.929996   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.936475   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.936525   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.430308   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.437787   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:29.437838   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.930326   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.942625   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:52:29.953842   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:29.953875   47365 api_server.go:131] duration metric: took 5.810704359s to wait for apiserver health ...
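
The healthz probes above show the usual restart sequence: 403 while anonymous access is still forbidden, then 500 while post-start hooks (rbac/bootstrap-roles and friends) finish, then 200. A hedged sketch of such a poll loop, not minikube's api_server.go, using the address and roughly the 500ms retry cadence visible in the log; the insecure TLS setting stands in for an unauthenticated probe and is for illustration only:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // /healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.27:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
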
	I1205 20:52:29.953889   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:29.953904   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:29.955505   47365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:27.326223   46374 crio.go:444] Took 1.852284 seconds to copy over tarball
	I1205 20:52:27.326333   46374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:27.374784   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:29.378733   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:30.375181   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:30.375266   46866 pod_ready.go:81] duration metric: took 5.021883955s waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.375316   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:29.956914   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:29.981391   47365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:30.016634   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:30.030957   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:30.031030   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:30.031047   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:30.031069   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:30.031088   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:30.031117   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:30.031135   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:30.031148   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:30.031165   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:30.031177   47365 system_pods.go:74] duration metric: took 14.513879ms to wait for pod list to return data ...
	I1205 20:52:30.031190   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:30.035458   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:30.035493   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:30.035506   47365 node_conditions.go:105] duration metric: took 4.295594ms to run NodePressure ...
	I1205 20:52:30.035525   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:30.302125   47365 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307852   47365 kubeadm.go:787] kubelet initialised
	I1205 20:52:30.307875   47365 kubeadm.go:788] duration metric: took 5.724991ms waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307883   47365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:30.316621   47365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.323682   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323716   47365 pod_ready.go:81] duration metric: took 7.060042ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.323728   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323736   47365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.338909   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338945   47365 pod_ready.go:81] duration metric: took 15.198541ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.338967   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338977   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.349461   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349491   47365 pod_ready.go:81] duration metric: took 10.504515ms waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.349505   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349513   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.422520   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422553   47365 pod_ready.go:81] duration metric: took 73.030993ms waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.422569   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422588   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.212527   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212553   47365 pod_ready.go:81] duration metric: took 789.956497ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.212564   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212575   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.727110   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727140   47365 pod_ready.go:81] duration metric: took 514.553589ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.727154   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727162   47365 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.168658   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168695   47365 pod_ready.go:81] duration metric: took 441.52358ms waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:32.168711   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168720   47365 pod_ready.go:38] duration metric: took 1.860826751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:32.168747   47365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:52:32.182053   47365 ops.go:34] apiserver oom_adj: -16
	I1205 20:52:32.182075   47365 kubeadm.go:640] restartCluster took 22.440428452s
	I1205 20:52:32.182083   47365 kubeadm.go:406] StartCluster complete in 22.493245354s
	I1205 20:52:32.182130   47365 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.182208   47365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:52:32.184035   47365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.290773   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:52:32.290931   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:32.290921   47365 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:52:32.291055   47365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291079   47365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291088   47365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291099   47365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-463614"
	I1205 20:52:32.291123   47365 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291133   47365 addons.go:240] addon metrics-server should already be in state true
	I1205 20:52:32.291177   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291093   47365 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291220   47365 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:52:32.291298   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291586   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291607   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291633   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291635   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291713   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291739   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.311298   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I1205 20:52:32.311514   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I1205 20:52:32.311541   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1205 20:52:32.311733   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.311932   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312026   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312291   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312325   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312434   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312456   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312487   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312501   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312688   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312763   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312833   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.313276   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313300   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.313359   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313390   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.316473   47365 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.316493   47365 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:52:32.316520   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.317093   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.317125   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.328598   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1205 20:52:32.329097   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.329225   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I1205 20:52:32.329589   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.329608   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.329674   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.330230   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.330248   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.330298   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330484   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330553   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330719   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330908   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I1205 20:52:32.331201   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.331935   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.331953   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.332351   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.332472   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.332653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.512055   47365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:52:32.333098   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.511993   47365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:52:32.536814   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:52:32.512201   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.536942   47365 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.536958   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:52:32.536985   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.536843   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:52:32.537043   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.541412   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541780   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541924   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.541958   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542190   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542369   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.542394   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542434   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.542641   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.542748   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542905   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.542939   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.543088   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.543246   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.554014   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1205 20:52:32.554513   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.554975   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.555007   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.555387   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.555634   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.557606   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.557895   47365 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.557911   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:52:32.557936   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.561075   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.561553   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.561942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.562135   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.562338   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.673513   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.682442   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:52:32.682472   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:52:32.706007   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.726379   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:52:32.726413   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:52:32.779247   47365 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:52:32.780175   47365 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-463614" context rescaled to 1 replicas
	I1205 20:52:32.780220   47365 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:52:32.787518   47365 out.go:177] * Verifying Kubernetes components...
	I1205 20:52:32.790046   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:32.796219   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:32.796248   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:52:32.854438   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:34.594203   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920648219s)
	I1205 20:52:34.594267   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594294   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888240954s)
	I1205 20:52:34.594331   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594343   47365 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.80425984s)
	I1205 20:52:34.594373   47365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:34.594350   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594710   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594729   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.594755   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594772   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594783   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594801   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594754   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594860   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.595134   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595195   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595229   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595238   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.595356   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595375   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.610358   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.610390   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.610651   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.610677   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689242   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.834763203s)
	I1205 20:52:34.689294   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689309   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.689648   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.689698   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.689717   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689740   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.690020   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.690025   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.690035   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.690046   47365 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-463614"
	I1205 20:52:34.692072   47365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 20:52:30.639619   46374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.313251826s)
	I1205 20:52:30.641314   46374 crio.go:451] Took 3.315054 seconds to extract the tarball
	I1205 20:52:30.641328   46374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:30.687076   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:30.745580   46374 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:30.745603   46374 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:30.745681   46374 ssh_runner.go:195] Run: crio config
	I1205 20:52:30.807631   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:30.807656   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:30.807674   46374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:30.807692   46374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-331495 NodeName:embed-certs-331495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:30.807828   46374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-331495"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:30.807897   46374 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-331495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
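The empty ExecStart= in the drop-in above is intentional: in a systemd drop-in file it clears the ExecStart inherited from the base kubelet.service before the new command line is set. A quick way to confirm the merged unit on the node (a sketch, assuming the standard systemd tooling inside the guest):

    # show the base unit plus every drop-in systemd will merge
    systemctl cat kubelet
    # print only the effective ExecStart after the reset
    systemctl show kubelet -p ExecStart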
	I1205 20:52:30.807958   46374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:30.820571   46374 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:30.820679   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:30.831881   46374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:52:30.852058   46374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:30.870516   46374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 20:52:30.888000   46374 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:30.892529   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
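The one-liner above filters any stale control-plane.minikube.internal entry out of /etc/hosts and re-appends the current mapping via a temp file. Spelled out step by step (a hedged equivalent using the same IP as this run):

    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop the old entry, keep everything else
    echo $'192.168.72.180\tcontrol-plane.minikube.internal' >> /tmp/hosts.new   # append the fresh mapping
    sudo cp /tmp/hosts.new /etc/hosts                                           # swap the file into place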
	I1205 20:52:30.904910   46374 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495 for IP: 192.168.72.180
	I1205 20:52:30.904950   46374 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:30.905143   46374 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:30.905197   46374 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:30.905280   46374 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/client.key
	I1205 20:52:30.905336   46374 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key.379caec1
	I1205 20:52:30.905368   46374 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key
	I1205 20:52:30.905463   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:30.905489   46374 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:30.905499   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:30.905525   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:30.905550   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:30.905572   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:30.905609   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:30.906129   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:30.930322   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:30.953120   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:30.976792   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:31.000462   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:31.025329   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:31.050451   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:31.075644   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:31.101693   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:31.125712   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:31.149721   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:31.173466   46374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:31.191836   46374 ssh_runner.go:195] Run: openssl version
	I1205 20:52:31.197909   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:31.212206   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219081   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219155   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.225423   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:31.239490   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:31.251505   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256613   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256678   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.262730   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:31.274879   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:31.286201   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291593   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291658   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.298904   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
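The hash-and-symlink pairs above are how OpenSSL locates trusted CAs: openssl x509 -hash -noout prints the subject-name hash, and /etc/ssl/certs needs a <hash>.0 symlink pointing at the PEM file. A minimal check using names taken from this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # should resolve to minikubeCA.pem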
	I1205 20:52:31.310560   46374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:31.315670   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:31.322461   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:31.328590   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:31.334580   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:31.341827   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:31.348456   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
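Each of the -checkend 86400 runs above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not have expired by then, non-zero means it will. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert is valid for at least another 24h"
    else
      echo "etcd server cert expires within 24h (or is already expired)"
    fi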
	I1205 20:52:31.354835   46374 kubeadm.go:404] StartCluster: {Name:embed-certs-331495 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:31.354945   46374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:31.355024   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:31.396272   46374 cri.go:89] found id: ""
	I1205 20:52:31.396346   46374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:31.406603   46374 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:31.406629   46374 kubeadm.go:636] restartCluster start
	I1205 20:52:31.406683   46374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:31.417671   46374 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.419068   46374 kubeconfig.go:92] found "embed-certs-331495" server: "https://192.168.72.180:8443"
	I1205 20:52:31.421304   46374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:31.432188   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.432260   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.445105   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.445132   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.445182   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.457857   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.958205   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.958322   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.972477   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.458645   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.458732   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.475471   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.958778   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.958872   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.973340   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.458838   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.458924   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.475090   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.958680   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.958776   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.974789   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.458297   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.458371   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.471437   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.958961   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.959030   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.972007   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:35.458648   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.458729   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.471573   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.362684   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.362706   46866 pod_ready.go:81] duration metric: took 1.98737949s waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.362715   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368694   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.368717   46866 pod_ready.go:81] duration metric: took 5.993796ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368726   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375418   46866 pod_ready.go:92] pod "kube-proxy-mltvl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.375442   46866 pod_ready.go:81] duration metric: took 6.709035ms waiting for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375452   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383393   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.383418   46866 pod_ready.go:81] duration metric: took 7.957397ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383430   46866 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:34.497914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:34.693693   47365 addons.go:502] enable addons completed in 2.40279745s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 20:52:35.331317   46700 retry.go:31] will retry after 13.122920853s: kubelet not initialised
	I1205 20:52:35.958930   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.959020   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.971607   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.458135   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.458202   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.475097   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.958621   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.958703   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.974599   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.458670   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.458790   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.472296   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.958470   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.958561   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.971241   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.458862   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.458957   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.471475   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.958727   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.958807   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.971366   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.458991   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.459084   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.471352   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.958955   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.959052   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.972803   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:40.458181   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.458251   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.470708   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.499335   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:38.996779   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:36.611450   47365 node_ready.go:58] node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:39.111234   47365 node_ready.go:49] node "default-k8s-diff-port-463614" has status "Ready":"True"
	I1205 20:52:39.111266   47365 node_ready.go:38] duration metric: took 4.51686489s waiting for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:39.111278   47365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:39.117815   47365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124431   47365 pod_ready.go:92] pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.124455   47365 pod_ready.go:81] duration metric: took 6.615213ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124464   47365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131301   47365 pod_ready.go:92] pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.131340   47365 pod_ready.go:81] duration metric: took 6.85604ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131352   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:41.155265   47365 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:40.958830   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.958921   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.970510   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:41.432806   46374 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:41.432840   46374 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:41.432854   46374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:41.432909   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:41.476486   46374 cri.go:89] found id: ""
	I1205 20:52:41.476550   46374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:41.493676   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:41.503594   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:41.503681   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512522   46374 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512550   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:41.645081   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.368430   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.586289   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.657555   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
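Rather than a full kubeadm init, the restart path replays individual init phases against the saved /var/tmp/minikube/kubeadm.yaml, exactly as logged above. The same sequence, runnable by hand on the node (a sketch of what minikube is doing here, not an additional step to perform):

    CONF=/var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase certs all         --config "$CONF"   # regenerate any missing certificates
    sudo kubeadm init phase kubeconfig all    --config "$CONF"   # admin/kubelet/controller-manager/scheduler kubeconfigs
    sudo kubeadm init phase kubelet-start     --config "$CONF"   # write kubelet config and (re)start kubelet
    sudo kubeadm init phase control-plane all --config "$CONF"   # static pod manifests for the control plane
    sudo kubeadm init phase etcd local        --config "$CONF"   # static pod manifest for local etcd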
	I1205 20:52:42.753020   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:42.753103   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:42.767926   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.286111   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.786148   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.285601   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.785638   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.285508   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.326812   46374 api_server.go:72] duration metric: took 2.573794156s to wait for apiserver process to appear ...
	I1205 20:52:45.326839   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:45.326857   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327337   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:45.327367   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327771   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:40.998702   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:43.508882   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:42.152898   47365 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:42.152926   47365 pod_ready.go:81] duration metric: took 3.021552509s waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:42.152939   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320531   47365 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.320632   47365 pod_ready.go:81] duration metric: took 1.167680941s waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320660   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521255   47365 pod_ready.go:92] pod "kube-proxy-g4zct" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.521286   47365 pod_ready.go:81] duration metric: took 200.606753ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521300   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911946   47365 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.911972   47365 pod_ready.go:81] duration metric: took 390.664131ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911983   47365 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:46.220630   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.459426   46700 kubeadm.go:787] kubelet initialised
	I1205 20:52:48.459452   46700 kubeadm.go:788] duration metric: took 53.977281861s waiting for restarted kubelet to initialise ...
	I1205 20:52:48.459460   46700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:48.465332   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471155   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.471184   46700 pod_ready.go:81] duration metric: took 5.815983ms waiting for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471195   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476833   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.476861   46700 pod_ready.go:81] duration metric: took 5.658311ms waiting for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476876   46700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481189   46700 pod_ready.go:92] pod "etcd-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.481217   46700 pod_ready.go:81] duration metric: took 4.332284ms waiting for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481230   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485852   46700 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.485869   46700 pod_ready.go:81] duration metric: took 4.630813ms waiting for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485879   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
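These pod_ready waits poll each pod's Ready condition through the API server. The same check can be reproduced from the host with kubectl (context and pod name taken from this run; the context name is assumed to match the profile):

    kubectl --context old-k8s-version-061206 -n kube-system \
      wait --for=condition=Ready pod/kube-apiserver-old-k8s-version-061206 --timeout=4m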
	I1205 20:52:45.828213   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.185115   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.185143   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.185156   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.228977   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.229017   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.328278   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.336930   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.336971   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:49.828530   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.835188   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.835215   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:50.328834   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.337852   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:50.337885   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:45.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:47.998466   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.497317   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.828313   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.835050   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:52:50.844093   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:50.844124   46374 api_server.go:131] duration metric: took 5.517278039s to wait for apiserver health ...
	I1205 20:52:50.844134   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:50.844141   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:50.846047   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:48.220942   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.720446   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.858954   46700 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.858980   46700 pod_ready.go:81] duration metric: took 373.093905ms waiting for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.858989   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260468   46700 pod_ready.go:92] pod "kube-proxy-r5n6g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.260493   46700 pod_ready.go:81] duration metric: took 401.497792ms waiting for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260501   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658952   46700 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.658977   46700 pod_ready.go:81] duration metric: took 398.469864ms waiting for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658986   46700 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:51.966947   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.848285   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:50.865469   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
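The 457 bytes written to /etc/cni/net.d/1-k8s.conflist above are minikube's bridge CNI config; the file's contents are not reproduced in this log. A generic bridge-plus-portmap conflist of the same shape (pod CIDR taken from this run, every other field an illustrative assumption) could be written like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF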
	I1205 20:52:50.918755   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:50.951671   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:50.951705   46374 system_pods.go:61] "coredns-5dd5756b68-7xr6w" [8300dbf8-413a-4171-9e56-53f0f2d03fd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:50.951712   46374 system_pods.go:61] "etcd-embed-certs-331495" [b2802bcb-262e-4d2a-9589-b1b3885de515] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:50.951722   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [6f9a28a7-8827-4071-8c68-f2671e7a8017] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:50.951738   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [24e85887-7f58-4a5c-b0d4-4eebd6076a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:50.951744   46374 system_pods.go:61] "kube-proxy-76qq2" [ffd744ec-9522-443c-b609-b11e24ab9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:50.951750   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [aaa502dc-a7cf-4f76-b79f-aa8be1ae48f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:50.951756   46374 system_pods.go:61] "metrics-server-57f55c9bc5-bcg28" [e60503c2-732d-44a3-b5da-fbf7a0cfd981] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:50.951761   46374 system_pods.go:61] "storage-provisioner" [be1aa61b-82e9-4382-ab1c-89e30b801fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:50.951767   46374 system_pods.go:74] duration metric: took 32.973877ms to wait for pod list to return data ...
	I1205 20:52:50.951773   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:50.971413   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:50.971440   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:50.971449   46374 node_conditions.go:105] duration metric: took 19.672668ms to run NodePressure ...
	I1205 20:52:50.971465   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:51.378211   46374 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383462   46374 kubeadm.go:787] kubelet initialised
	I1205 20:52:51.383487   46374 kubeadm.go:788] duration metric: took 5.246601ms waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383495   46374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:51.393558   46374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:53.414801   46374 pod_ready.go:102] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.426681   46374 pod_ready.go:92] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:55.426710   46374 pod_ready.go:81] duration metric: took 4.033124274s waiting for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:55.426725   46374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:52.498509   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.997539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:53.221825   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.723682   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.468896   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:56.966471   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.468158   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.469797   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.497582   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.500937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.727756   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.727968   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.466541   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469387   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469996   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.968435   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.969033   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.969065   46374 pod_ready.go:81] duration metric: took 9.542324599s waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.969073   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975019   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.975041   46374 pod_ready.go:81] duration metric: took 5.961268ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975049   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980743   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.980771   46374 pod_ready.go:81] duration metric: took 5.713974ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980779   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985565   46374 pod_ready.go:92] pod "kube-proxy-76qq2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.985596   46374 pod_ready.go:81] duration metric: took 4.805427ms waiting for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985610   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992009   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.992035   46374 pod_ready.go:81] duration metric: took 6.416324ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992047   46374 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:01.996877   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.997311   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:02.221319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.720314   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.966830   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.465943   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:07.272848   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.272897   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:05.997810   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.497408   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.722608   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.222226   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.965894   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.967253   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.466458   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.773608   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.773778   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.997547   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:12.999476   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.496736   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.721128   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.721371   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.221780   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.466602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.965160   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.272951   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.772527   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.497284   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.498006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.223073   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.724402   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.966424   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.466866   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.772789   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.273369   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:21.997270   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.496150   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:23.221999   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.223587   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.967755   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.465568   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.772596   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:30.273464   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:26.496470   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.003099   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.721654   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.724134   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.466332   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.966465   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.773521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:35.272236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.497006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.000663   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.221725   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.719806   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.466035   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.966501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:37.773436   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.274255   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.496949   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.996265   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.721339   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.723854   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.221087   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:39.465585   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.465785   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.467239   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:42.773263   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:44.773717   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.998588   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.496904   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.497783   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.222148   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.722122   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.966317   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.966572   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.272412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:49.273057   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.997444   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.496708   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.722350   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.219843   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.467523   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.967357   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:51.773424   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:53.775574   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.499839   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.997448   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.222442   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.719693   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:55.466751   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:57.966602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.271805   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.272923   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.273306   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.998244   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:59.498440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.720684   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.729688   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.220861   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.466162   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.966846   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.773903   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.271747   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.995748   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:04.002522   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:03.723212   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.224289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.465907   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.466264   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.272960   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.274281   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.497442   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.997440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.721146   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:10.724743   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.966368   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.966796   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.772305   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.772470   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.496229   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.497913   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.221912   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.722076   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:14.467708   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:16.965932   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.773481   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:17.774552   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.273733   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.998027   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.496453   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.497053   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.223289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.722234   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.966869   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:21.465921   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:23.466328   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.272550   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.497084   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:24.498177   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.727882   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.221485   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.966388   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.466553   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.772616   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.773188   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:26.997209   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.997776   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.721711   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.722528   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:30.964854   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.966383   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.272612   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.275600   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:31.498601   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:33.997450   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.220641   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.222232   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.476491   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.968512   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.772248   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.272991   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.997574   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.999016   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.501116   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.723179   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.220182   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.469607   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.968860   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.274044   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.502208   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:44.997516   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.720811   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.721757   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.725689   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.466766   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.966704   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.773511   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.273161   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.274031   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.497342   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:49.502501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.223549   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.719890   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.465849   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.466157   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.772748   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:55.272781   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:51.997636   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.499333   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.720512   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.721826   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.466519   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.466580   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.274370   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.774179   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.997654   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.497915   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.221713   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.723015   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:58.965289   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:00.966027   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.967557   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.273349   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.773101   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.996491   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:03.996649   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.723123   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.220986   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.224736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.466592   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.966611   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.773180   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.774008   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.997589   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.998076   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.001226   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.720517   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.221172   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.466096   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.467200   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.272981   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.773210   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.496043   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.497518   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.725751   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.219939   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.966795   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:17.466501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.272578   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.273500   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.997861   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.499434   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.221058   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.720978   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.466641   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.965389   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.772109   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.274633   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.997800   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:24.497501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.220292   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.722738   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.966366   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.966799   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.465341   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.773108   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:27.774236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.274971   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:26.997610   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.997753   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.220185   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.220399   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.466026   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.966220   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.772859   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:35.272898   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:31.497899   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:33.500772   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.220696   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.221098   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.222701   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.966787   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.465676   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.775190   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.272006   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.000539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.497044   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.720509   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.730400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:39.468063   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:41.966415   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:42.276412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.772916   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.996937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.496928   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.220575   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.724283   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.465646   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.467000   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.773090   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.273675   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.997477   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:47.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.998126   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.220758   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:50.720911   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.966711   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.468554   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.773277   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:52.501489   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:54.996998   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.221047   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.221493   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.965841   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.965891   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.465977   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.272446   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.772269   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.997565   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.496443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:57.722571   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.724736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.466069   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.966747   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.772715   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.271368   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.274084   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:01.498102   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.498428   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.220645   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.720012   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.966850   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.467719   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.772997   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.273279   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.998642   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:08.001018   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.496939   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:06.721938   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.219709   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:11.220579   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.968249   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.465039   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.773538   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.272696   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.500855   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.996837   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:13.725252   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.725522   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.465989   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:16.966908   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.273749   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.772650   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.496107   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.496914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:18.224365   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:20.720429   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.465513   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.967092   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.775353   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:24.277586   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.498047   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.999733   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.219319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:25.222340   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.967374   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.465973   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.468481   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.772514   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.774642   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.496794   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.498446   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:27.723499   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.222748   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.965650   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.967183   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.777450   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:33.276381   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.999443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.384081   46866 pod_ready.go:81] duration metric: took 4m0.000635015s waiting for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:32.384115   46866 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:32.384132   46866 pod_ready.go:38] duration metric: took 4m11.062812404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:32.384156   46866 kubeadm.go:640] restartCluster took 4m30.437260197s
	W1205 20:56:32.384250   46866 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:32.384280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:32.721610   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.220186   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.467452   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.966451   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.773516   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.773737   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.273185   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.221794   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:39.722400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.466005   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.467531   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.773790   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:45.272396   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:41.722481   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.734080   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.912982   47365 pod_ready.go:81] duration metric: took 4m0.000982583s waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:43.913024   47365 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:43.913038   47365 pod_ready.go:38] duration metric: took 4m4.801748698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:43.913063   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:56:43.913101   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:43.913175   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:43.965196   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:43.965220   47365 cri.go:89] found id: ""
	I1205 20:56:43.965228   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:43.965272   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:43.970257   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:43.970353   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:44.026974   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.027005   47365 cri.go:89] found id: ""
	I1205 20:56:44.027015   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:44.027099   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.032107   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:44.032212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:44.075721   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:44.075758   47365 cri.go:89] found id: ""
	I1205 20:56:44.075766   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:44.075823   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.082125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:44.082212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:44.125099   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:44.125122   47365 cri.go:89] found id: ""
	I1205 20:56:44.125129   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:44.125171   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.129477   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:44.129538   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:44.180281   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.180305   47365 cri.go:89] found id: ""
	I1205 20:56:44.180313   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:44.180357   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.185094   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:44.185173   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:44.228693   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.228719   47365 cri.go:89] found id: ""
	I1205 20:56:44.228730   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:44.228786   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.233574   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:44.233687   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:44.279286   47365 cri.go:89] found id: ""
	I1205 20:56:44.279312   47365 logs.go:284] 0 containers: []
	W1205 20:56:44.279321   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:44.279328   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:44.279390   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:44.333572   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.333598   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:44.333605   47365 cri.go:89] found id: ""
	I1205 20:56:44.333614   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:44.333678   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.339080   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.343653   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:44.343687   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.412744   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:44.412785   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.457374   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:44.457402   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.521640   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:44.521676   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:44.536612   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:44.536636   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.586795   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:44.586836   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:45.065254   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:45.065293   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:45.126209   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:45.126242   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:45.166553   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:45.166580   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:45.214849   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:45.214887   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:45.371687   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:45.371732   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:45.417585   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:45.417615   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:45.455524   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:45.455559   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
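Each "Gathering logs for ..." round above is minikube shelling into the node and pulling per-component logs with crictl and journalctl. The same commands can be run by hand from a shell inside the node; a minimal sketch, assuming the profile name default-k8s-diff-port-463614 from this run, with container IDs taken from crictl ps:

  # Open a shell inside the node for this profile
  minikube -p default-k8s-diff-port-463614 ssh
  # Then, inside the node:
  sudo crictl ps -a                            # list all containers and their IDs
  sudo crictl logs --tail 400 <container-id>   # per-component logs, as in the lines above
  sudo journalctl -u crio -n 400               # CRI-O runtime logs
  sudo journalctl -u kubelet -n 400            # kubelet logs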
	I1205 20:56:44.965462   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.967433   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:47.272958   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.274398   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.621173   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.236869123s)
	I1205 20:56:46.621264   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:46.636086   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:46.647003   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:46.657201   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:46.657241   46866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:56:46.882231   46866 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:48.007463   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:56:48.023675   47365 api_server.go:72] duration metric: took 4m15.243410399s to wait for apiserver process to appear ...
	I1205 20:56:48.023713   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:56:48.023748   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:48.023818   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:48.067278   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.067301   47365 cri.go:89] found id: ""
	I1205 20:56:48.067308   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:48.067359   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.072370   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:48.072446   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:48.118421   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:48.118444   47365 cri.go:89] found id: ""
	I1205 20:56:48.118453   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:48.118509   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.123954   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:48.124019   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:48.173864   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:48.173890   47365 cri.go:89] found id: ""
	I1205 20:56:48.173900   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:48.173955   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.178717   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:48.178790   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:48.221891   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:48.221915   47365 cri.go:89] found id: ""
	I1205 20:56:48.221924   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:48.221985   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.226811   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:48.226886   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:48.271431   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:48.271454   47365 cri.go:89] found id: ""
	I1205 20:56:48.271463   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:48.271518   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.276572   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:48.276655   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:48.326438   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:48.326466   47365 cri.go:89] found id: ""
	I1205 20:56:48.326476   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:48.326534   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.334539   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:48.334611   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:48.377929   47365 cri.go:89] found id: ""
	I1205 20:56:48.377955   47365 logs.go:284] 0 containers: []
	W1205 20:56:48.377965   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:48.377973   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:48.378035   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:48.430599   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:48.430621   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:48.430629   47365 cri.go:89] found id: ""
	I1205 20:56:48.430638   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:48.430691   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.434882   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.439269   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:48.439299   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.495069   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:48.495113   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:48.955220   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:48.955257   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:48.971222   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:48.971246   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:49.108437   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:49.108470   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:49.150916   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:49.150940   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:49.207092   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:49.207141   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:49.251940   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:49.251969   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:49.293885   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:49.293918   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:49.349151   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:49.349187   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:49.403042   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:49.403079   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:49.466816   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:49.466858   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:49.525300   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:49.525341   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:49.467873   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.659950   46700 pod_ready.go:81] duration metric: took 4m0.000950283s waiting for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:49.659985   46700 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:49.660008   46700 pod_ready.go:38] duration metric: took 4m1.200539602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:49.660056   46700 kubeadm.go:640] restartCluster took 5m17.548124184s
	W1205 20:56:49.660130   46700 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:49.660162   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:51.776117   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:54.275521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:52.099610   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:56:52.106838   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:56:52.109813   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:56:52.109835   47365 api_server.go:131] duration metric: took 4.086114093s to wait for apiserver health ...
	I1205 20:56:52.109845   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:56:52.109874   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:52.109929   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:52.155290   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:52.155319   47365 cri.go:89] found id: ""
	I1205 20:56:52.155328   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:52.155382   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.160069   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:52.160137   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:52.197857   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.197885   47365 cri.go:89] found id: ""
	I1205 20:56:52.197894   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:52.197956   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.203012   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:52.203075   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:52.257881   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.257904   47365 cri.go:89] found id: ""
	I1205 20:56:52.257914   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:52.257972   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.264817   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:52.264899   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:52.313302   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.313331   47365 cri.go:89] found id: ""
	I1205 20:56:52.313341   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:52.313398   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.318864   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:52.318972   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:52.389306   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.389333   47365 cri.go:89] found id: ""
	I1205 20:56:52.389342   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:52.389400   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.406125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:52.406194   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:52.458735   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:52.458760   47365 cri.go:89] found id: ""
	I1205 20:56:52.458770   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:52.458821   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.463571   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:52.463642   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:52.529035   47365 cri.go:89] found id: ""
	I1205 20:56:52.529067   47365 logs.go:284] 0 containers: []
	W1205 20:56:52.529079   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:52.529088   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:52.529157   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:52.583543   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:52.583578   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.583585   47365 cri.go:89] found id: ""
	I1205 20:56:52.583594   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:52.583649   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.589299   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.595000   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:52.595024   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.671447   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:52.671487   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.719185   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:52.719223   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:52.780173   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:52.780203   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.823808   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:52.823843   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.874394   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:52.874428   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:52.938139   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:52.938177   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.982386   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:52.982414   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:53.029082   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:53.029111   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:53.447057   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:53.447099   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:53.465029   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:53.465066   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:53.627351   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:53.627400   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:53.694357   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:53.694393   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:56.267579   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:56:56.267614   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.267624   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.267631   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.267638   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.267644   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.267650   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.267660   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.267672   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.267683   47365 system_pods.go:74] duration metric: took 4.157830691s to wait for pod list to return data ...
	I1205 20:56:56.267696   47365 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:56:56.271148   47365 default_sa.go:45] found service account: "default"
	I1205 20:56:56.271170   47365 default_sa.go:55] duration metric: took 3.468435ms for default service account to be created ...
	I1205 20:56:56.271176   47365 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:56:56.277630   47365 system_pods.go:86] 8 kube-system pods found
	I1205 20:56:56.277654   47365 system_pods.go:89] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.277660   47365 system_pods.go:89] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.277665   47365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.277669   47365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.277674   47365 system_pods.go:89] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.277679   47365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.277688   47365 system_pods.go:89] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.277696   47365 system_pods.go:89] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.277715   47365 system_pods.go:126] duration metric: took 6.533492ms to wait for k8s-apps to be running ...
	I1205 20:56:56.277726   47365 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:56:56.277772   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:56.296846   47365 system_svc.go:56] duration metric: took 19.109991ms WaitForService to wait for kubelet.
	I1205 20:56:56.296877   47365 kubeadm.go:581] duration metric: took 4m23.516618576s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:56:56.296902   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:56:56.301504   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:56:56.301530   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:56:56.301542   47365 node_conditions.go:105] duration metric: took 4.634882ms to run NodePressure ...
	I1205 20:56:56.301552   47365 start.go:228] waiting for startup goroutines ...
	I1205 20:56:56.301560   47365 start.go:233] waiting for cluster config update ...
	I1205 20:56:56.301573   47365 start.go:242] writing updated cluster config ...
	I1205 20:56:56.301859   47365 ssh_runner.go:195] Run: rm -f paused
	I1205 20:56:56.357189   47365 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:56:56.358798   47365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-463614" cluster and "default" namespace by default
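With that, the default-k8s-diff-port-463614 restart completes: the apiserver answered /healthz on https://192.168.39.27:8444, the eight kube-system pods were listed (metrics-server still Pending), and the kubeconfig context was switched to the profile. A minimal sketch of repeating those checks by hand, assuming the context and node name match the profile as shown above; illustrative only:

  # API server health, equivalent to the healthz probe in the log
  kubectl --context default-k8s-diff-port-463614 get --raw /healthz
  # kube-system pod status, equivalent to the system_pods wait
  kubectl --context default-k8s-diff-port-463614 -n kube-system get pods -o wide
  # Node conditions checked by the NodePressure step
  kubectl --context default-k8s-diff-port-463614 describe node default-k8s-diff-port-463614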
	I1205 20:56:54.756702   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096505481s)
	I1205 20:56:54.756786   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:54.774684   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:54.786308   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:54.796762   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:54.796809   46700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 20:56:55.081318   46700 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:58.569752   46866 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1205 20:56:58.569873   46866 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:56:58.569988   46866 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:56:58.570119   46866 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:56:58.570261   46866 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:56:58.570368   46866 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:56:58.572785   46866 out.go:204]   - Generating certificates and keys ...
	I1205 20:56:58.573020   46866 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:56:58.573232   46866 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:56:58.573410   46866 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:56:58.573510   46866 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:56:58.573717   46866 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:56:58.573868   46866 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:56:58.574057   46866 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:56:58.574229   46866 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:56:58.574517   46866 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:56:58.574760   46866 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:56:58.574903   46866 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:56:58.575070   46866 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:56:58.575205   46866 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:56:58.575363   46866 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:56:58.575515   46866 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:56:58.575600   46866 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:56:58.575799   46866 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:56:58.576083   46866 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:56:58.576320   46866 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:56:58.580654   46866 out.go:204]   - Booting up control plane ...
	I1205 20:56:58.581337   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:56:58.581851   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:56:58.582029   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:56:58.582667   46866 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:56:58.582988   46866 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:56:58.583126   46866 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:56:58.583631   46866 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:56:58.583908   46866 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502137 seconds
	I1205 20:56:58.584157   46866 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:56:58.584637   46866 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:56:58.584882   46866 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:56:58.585370   46866 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:56:58.585492   46866 kubeadm.go:322] [bootstrap-token] Using token: fap3k3.pr3uz4d90n7oyvds
	I1205 20:56:58.590063   46866 out.go:204]   - Configuring RBAC rules ...
	I1205 20:56:58.590356   46866 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:56:58.590482   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:56:58.590692   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:56:58.590887   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:56:58.591031   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:56:58.591131   46866 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:56:58.591269   46866 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:56:58.591323   46866 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:56:58.591378   46866 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:56:58.591383   46866 kubeadm.go:322] 
	I1205 20:56:58.591455   46866 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:56:58.591462   46866 kubeadm.go:322] 
	I1205 20:56:58.591554   46866 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:56:58.591559   46866 kubeadm.go:322] 
	I1205 20:56:58.591590   46866 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:56:58.591659   46866 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:56:58.591719   46866 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:56:58.591724   46866 kubeadm.go:322] 
	I1205 20:56:58.591787   46866 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:56:58.591793   46866 kubeadm.go:322] 
	I1205 20:56:58.591848   46866 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:56:58.591853   46866 kubeadm.go:322] 
	I1205 20:56:58.591914   46866 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:56:58.592015   46866 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:56:58.592093   46866 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:56:58.592099   46866 kubeadm.go:322] 
	I1205 20:56:58.592197   46866 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:56:58.592300   46866 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:56:58.592306   46866 kubeadm.go:322] 
	I1205 20:56:58.592403   46866 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592525   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:56:58.592550   46866 kubeadm.go:322] 	--control-plane 
	I1205 20:56:58.592558   46866 kubeadm.go:322] 
	I1205 20:56:58.592645   46866 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:56:58.592650   46866 kubeadm.go:322] 
	I1205 20:56:58.592743   46866 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592870   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:56:58.592880   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:56:58.592889   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:56:58.594456   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:56:56.773764   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.778395   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.595862   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:56:58.625177   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
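The bridge CNI step above writes a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist inside the no-preload-143651 node; the file contents are not reproduced in the log. A minimal way to look at what was installed, assuming that profile name and the usual CNI plugin location on the minikube guest image (an assumption, not shown in this log):

  # From a shell inside the node (minikube -p no-preload-143651 ssh):
  sudo cat /etc/cni/net.d/1-k8s.conflist   # the bridge conflist written above
  ls /opt/cni/bin                          # bridge/host-local plugin binaries (assumed location)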
	I1205 20:56:58.683896   46866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:56:58.683977   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.684060   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=no-preload-143651 minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.741242   46866 ops.go:34] apiserver oom_adj: -16
	I1205 20:56:59.114129   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.238212   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.869086   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:00.368538   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.272299   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:03.272604   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:04.992619   46374 pod_ready.go:81] duration metric: took 4m0.000553964s waiting for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:04.992652   46374 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:57:04.992691   46374 pod_ready.go:38] duration metric: took 4m13.609186276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:04.992726   46374 kubeadm.go:640] restartCluster took 4m33.586092425s
	W1205 20:57:04.992782   46374 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:57:04.992808   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:57:00.868500   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.369084   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.368409   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.869341   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.368765   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.869054   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.368855   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.869144   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:05.368635   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.047040   46700 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1205 20:57:09.047132   46700 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:09.047236   46700 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:09.047350   46700 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:09.047462   46700 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:09.047583   46700 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:09.047693   46700 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:09.047752   46700 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1205 20:57:09.047825   46700 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:09.049606   46700 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:09.049706   46700 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:09.049802   46700 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:09.049885   46700 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:09.049963   46700 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:09.050058   46700 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:09.050148   46700 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:09.050235   46700 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:09.050350   46700 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:09.050468   46700 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:09.050563   46700 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:09.050627   46700 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:09.050732   46700 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:09.050817   46700 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:09.050897   46700 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:09.050997   46700 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:09.051080   46700 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:09.051165   46700 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:09.052610   46700 out.go:204]   - Booting up control plane ...
	I1205 20:57:09.052722   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:09.052806   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:09.052870   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:09.052965   46700 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:09.053103   46700 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:09.053203   46700 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005642 seconds
	I1205 20:57:09.053354   46700 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:09.053514   46700 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:09.053563   46700 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:09.053701   46700 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-061206 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 20:57:09.053783   46700 kubeadm.go:322] [bootstrap-token] Using token: syik3l.i77juzhd1iybx3my
	I1205 20:57:09.055286   46700 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:09.055409   46700 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:09.055599   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:09.055749   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:09.055862   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:09.055982   46700 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:09.056043   46700 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:09.056106   46700 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:09.056116   46700 kubeadm.go:322] 
	I1205 20:57:09.056197   46700 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:09.056207   46700 kubeadm.go:322] 
	I1205 20:57:09.056307   46700 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:09.056329   46700 kubeadm.go:322] 
	I1205 20:57:09.056377   46700 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:09.056456   46700 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:09.056533   46700 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:09.056540   46700 kubeadm.go:322] 
	I1205 20:57:09.056600   46700 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:09.056669   46700 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:09.056729   46700 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:09.056737   46700 kubeadm.go:322] 
	I1205 20:57:09.056804   46700 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1205 20:57:09.056868   46700 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:09.056874   46700 kubeadm.go:322] 
	I1205 20:57:09.056944   46700 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057093   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:09.057135   46700 kubeadm.go:322]     --control-plane 	  
	I1205 20:57:09.057150   46700 kubeadm.go:322] 
	I1205 20:57:09.057252   46700 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:09.057260   46700 kubeadm.go:322] 
	I1205 20:57:09.057360   46700 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057502   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:09.057514   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:57:09.057520   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:09.058762   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
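	The two kubeadm join commands logged above carry a --discovery-token-ca-cert-hash, which is simply the SHA-256 digest of the cluster CA's DER-encoded public key. As a rough illustration (assuming an RSA CA, kubeadm's default, and this run's certificateDir of /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki), the same hash can be recomputed on the control-plane node with:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'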
	I1205 20:57:05.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.368434   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.869228   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.369175   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.868933   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.369028   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.868920   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.369223   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.869130   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.369240   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.869318   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.369189   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.576975   46866 kubeadm.go:1088] duration metric: took 12.893071134s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:11.577015   46866 kubeadm.go:406] StartCluster complete in 5m9.690903424s
	I1205 20:57:11.577039   46866 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.577129   46866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:11.579783   46866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.580131   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:11.580364   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:57:11.580360   46866 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:11.580446   46866 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143651"
	I1205 20:57:11.580467   46866 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143651"
	W1205 20:57:11.580479   46866 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:11.580518   46866 addons.go:69] Setting metrics-server=true in profile "no-preload-143651"
	I1205 20:57:11.580535   46866 addons.go:231] Setting addon metrics-server=true in "no-preload-143651"
	W1205 20:57:11.580544   46866 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:11.580575   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580583   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580982   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580994   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580497   46866 addons.go:69] Setting default-storageclass=true in profile "no-preload-143651"
	I1205 20:57:11.581018   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581027   46866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143651"
	I1205 20:57:11.581303   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581357   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.581383   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.600887   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1205 20:57:11.600886   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1205 20:57:11.601552   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601681   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601760   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I1205 20:57:11.602152   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602177   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602260   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.602348   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602370   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602603   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602719   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602806   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.602996   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.603020   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.603329   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.603379   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.603477   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.603997   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.604040   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.606962   46866 addons.go:231] Setting addon default-storageclass=true in "no-preload-143651"
	W1205 20:57:11.606986   46866 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:11.607009   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.607331   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.607363   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.624885   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I1205 20:57:11.625358   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.625857   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.625869   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.626331   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.626627   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I1205 20:57:11.626832   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.627179   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.631282   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1205 20:57:11.632431   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.632516   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.632599   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.632763   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.633113   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.633639   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.633883   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.634495   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.634539   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.634823   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.637060   46866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:11.635196   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.641902   46866 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:11.641932   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:11.641960   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.642616   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.644862   46866 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:11.647090   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:11.647113   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:11.647134   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.646852   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647539   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.647564   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647755   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.648063   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.648295   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.648520   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.654458   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.654493   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654522   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.654556   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654801   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.655015   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.655247   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.661244   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1205 20:57:11.661886   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.662508   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.662534   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.663651   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.663907   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.666067   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.666501   46866 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.666523   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:11.666543   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.669659   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670106   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.670132   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670479   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.670673   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.670802   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.670915   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.816687   46866 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143651" context rescaled to 1 replicas
	I1205 20:57:11.816742   46866 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:11.820014   46866 out.go:177] * Verifying Kubernetes components...
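	The "coredns" rescale noted a few lines above shrinks the stock two-replica CoreDNS Deployment down to one replica for this single-node cluster. A hand-run equivalent of that step (illustrative only, not the code path minikube itself takes; the context name is the profile name from the log) would be:

	    kubectl --context no-preload-143651 -n kube-system scale deployment coredns --replicas=1
	    kubectl --context no-preload-143651 -n kube-system rollout status deployment coredns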
	I1205 20:57:09.060305   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:09.069861   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:09.093691   46700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:09.093847   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.093914   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=old-k8s-version-061206 minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.123857   46700 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:09.315555   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.435904   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.049845   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.549703   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.049931   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.549848   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.049776   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.549841   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.050053   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.549531   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
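	The repeated "kubectl get sa default" lines from process 46700, here and further below, are minikube polling until the cluster's default ServiceAccount exists; the log later reports this interval as the wait for elevateKubeSystemPrivileges. A hand-rolled sketch of the same wait (illustrative; the binary path and kubeconfig location are taken from the log above):

	    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done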
	I1205 20:57:11.821903   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:11.831116   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:11.867528   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.969463   46866 node_ready.go:35] waiting up to 6m0s for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:11.976207   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:11.976235   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:11.977230   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:12.003110   46866 node_ready.go:49] node "no-preload-143651" has status "Ready":"True"
	I1205 20:57:12.003132   46866 node_ready.go:38] duration metric: took 33.629273ms waiting for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:12.003142   46866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:12.053173   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:12.053208   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:12.140411   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:12.170492   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.170521   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:12.251096   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.778963   46866 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:12.779026   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779040   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779377   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779402   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.779411   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779411   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.779418   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779625   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779665   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.786021   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.786045   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.786331   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.786380   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.786400   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194477   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217217088s)
	I1205 20:57:13.194529   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194543   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.194883   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.194929   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.194948   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194960   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194970   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.195198   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.195212   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562441   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311301688s)
	I1205 20:57:13.562496   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562512   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.562826   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.562845   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562856   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562865   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.563115   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.563164   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.563177   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.563190   46866 addons.go:467] Verifying addon metrics-server=true in "no-preload-143651"
	I1205 20:57:13.564940   46866 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:13.566316   46866 addons.go:502] enable addons completed in 1.985974766s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:14.389400   46866 pod_ready.go:102] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:15.388445   46866 pod_ready.go:92] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.388478   46866 pod_ready.go:81] duration metric: took 3.248030471s waiting for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.388493   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.391728   46866 pod_ready.go:97] error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391759   46866 pod_ready.go:81] duration metric: took 3.251498ms waiting for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:15.391772   46866 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391781   46866 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399725   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.399745   46866 pod_ready.go:81] duration metric: took 7.956804ms waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399759   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407412   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.407436   46866 pod_ready.go:81] duration metric: took 7.672123ms waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407446   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414249   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.414295   46866 pod_ready.go:81] duration metric: took 6.840313ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414309   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587237   46866 pod_ready.go:92] pod "kube-proxy-6txsz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.587271   46866 pod_ready.go:81] duration metric: took 172.95478ms waiting for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587286   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985901   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.985930   46866 pod_ready.go:81] duration metric: took 398.634222ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985943   46866 pod_ready.go:38] duration metric: took 3.982790764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:15.985960   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:15.986019   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:16.009052   46866 api_server.go:72] duration metric: took 4.192253908s to wait for apiserver process to appear ...
	I1205 20:57:16.009082   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:16.009100   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:57:16.014689   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:57:16.015758   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:57:16.015781   46866 api_server.go:131] duration metric: took 6.691652ms to wait for apiserver health ...
	I1205 20:57:16.015791   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:16.188198   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:16.188232   46866 system_pods.go:61] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.188240   46866 system_pods.go:61] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.188246   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.188254   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.188261   46866 system_pods.go:61] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.188267   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.188279   46866 system_pods.go:61] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.188290   46866 system_pods.go:61] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.188301   46866 system_pods.go:74] duration metric: took 172.503422ms to wait for pod list to return data ...
	I1205 20:57:16.188311   46866 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:16.384722   46866 default_sa.go:45] found service account: "default"
	I1205 20:57:16.384759   46866 default_sa.go:55] duration metric: took 196.435091ms for default service account to be created ...
	I1205 20:57:16.384769   46866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:16.587515   46866 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:16.587542   46866 system_pods.go:89] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.587547   46866 system_pods.go:89] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.587554   46866 system_pods.go:89] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.587561   46866 system_pods.go:89] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.587567   46866 system_pods.go:89] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.587574   46866 system_pods.go:89] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.587585   46866 system_pods.go:89] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.587593   46866 system_pods.go:89] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.587604   46866 system_pods.go:126] duration metric: took 202.829744ms to wait for k8s-apps to be running ...
	I1205 20:57:16.587613   46866 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:16.587654   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:16.602489   46866 system_svc.go:56] duration metric: took 14.864421ms WaitForService to wait for kubelet.
	I1205 20:57:16.602521   46866 kubeadm.go:581] duration metric: took 4.785728725s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:16.602545   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:16.785610   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:16.785646   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:16.785663   46866 node_conditions.go:105] duration metric: took 183.112914ms to run NodePressure ...
	I1205 20:57:16.785677   46866 start.go:228] waiting for startup goroutines ...
	I1205 20:57:16.785686   46866 start.go:233] waiting for cluster config update ...
	I1205 20:57:16.785705   46866 start.go:242] writing updated cluster config ...
	I1205 20:57:16.786062   46866 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:16.840981   46866 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1205 20:57:16.842980   46866 out.go:177] * Done! kubectl is now configured to use "no-preload-143651" cluster and "default" namespace by default
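	The "Done!" line above follows minikube's note of a client/cluster minor-version skew of 1 (kubectl 1.28.4 against a v1.29.0-rc.1 API server), which is within kubectl's supported +/-1 minor skew. A quick manual check of the resulting kubeconfig state (illustrative commands only; the expected context name is the profile name reported in the log):

	    kubectl config current-context   # expected: no-preload-143651
	    kubectl version -o yaml          # prints clientVersion and serverVersion for comparison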
	I1205 20:57:14.049305   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:14.549423   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.050061   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.550221   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.049450   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.550094   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.049900   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.549923   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.050255   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.549399   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.615362   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.62253521s)
	I1205 20:57:19.615425   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:19.633203   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:57:19.643629   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:57:19.653655   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:57:19.653717   46374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:57:19.709748   46374 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:57:19.709836   46374 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:19.887985   46374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:19.888143   46374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:19.888243   46374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:20.145182   46374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:20.147189   46374 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:20.147319   46374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:20.147389   46374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:20.147482   46374 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:20.147875   46374 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:20.148583   46374 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:20.149486   46374 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:20.150362   46374 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:20.150974   46374 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:20.151523   46374 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:20.152166   46374 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:20.152419   46374 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:20.152504   46374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:20.435395   46374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:20.606951   46374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:20.754435   46374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:20.953360   46374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:20.954288   46374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:20.958413   46374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:19.049689   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.549608   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.049856   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.550245   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.050001   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.549839   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.049908   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.549764   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.050204   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.550196   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.049420   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.550152   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.050103   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.202067   46700 kubeadm.go:1088] duration metric: took 16.108268519s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:25.202100   46700 kubeadm.go:406] StartCluster complete in 5m53.142100786s
	I1205 20:57:25.202121   46700 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.202211   46700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:25.204920   46700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.205284   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:25.205635   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:57:25.205792   46700 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:25.205865   46700 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061206"
	I1205 20:57:25.205888   46700 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-061206"
	W1205 20:57:25.205896   46700 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:25.205954   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.205982   46700 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206011   46700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061206"
	I1205 20:57:25.206429   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206436   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206457   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206459   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206517   46700 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206531   46700 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-061206"
	W1205 20:57:25.206538   46700 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:25.206578   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.206906   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206936   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.228876   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1205 20:57:25.228902   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1205 20:57:25.229036   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1205 20:57:25.229487   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229569   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229646   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.230209   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230230   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230413   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230426   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230468   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230492   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230851   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.231494   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.231520   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.231955   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.232544   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.232578   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.233084   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.233307   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.237634   46700 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-061206"
	W1205 20:57:25.237660   46700 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:25.237691   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.238103   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.238138   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.252274   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I1205 20:57:25.252709   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.253307   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.253327   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.253689   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.253874   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.255891   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.258376   46700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:25.256849   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I1205 20:57:25.260119   46700 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.260145   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:25.260168   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.261358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.262042   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.262063   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.262590   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.262765   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.265705   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.265905   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.267942   46700 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:25.266347   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.266528   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.269653   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.269661   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:25.269687   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:25.269708   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.270383   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.270602   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.270764   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.274415   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.274914   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.274939   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.275267   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.275451   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.275594   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.275736   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.282847   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1205 20:57:25.283552   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.284174   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.284192   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.284659   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.285434   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.285469   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.306845   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1205 20:57:25.307358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.307884   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.307905   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.308302   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.308605   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.310363   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.310649   46700 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.310663   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:25.310682   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.313904   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314451   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.314482   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314756   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.314941   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.315053   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.315153   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.456874   46700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061206" context rescaled to 1 replicas
	I1205 20:57:25.456922   46700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:25.459008   46700 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:20.960444   46374 out.go:204]   - Booting up control plane ...
	I1205 20:57:20.960603   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:20.960721   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:20.961220   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:20.981073   46374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:20.982383   46374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:20.982504   46374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:57:21.127167   46374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:25.460495   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:25.531367   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.531600   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:25.531618   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:25.543589   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.624622   46700 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.624655   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:25.660979   46700 node_ready.go:49] node "old-k8s-version-061206" has status "Ready":"True"
	I1205 20:57:25.661005   46700 node_ready.go:38] duration metric: took 36.286483ms waiting for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.661017   46700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:25.666179   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:25.666208   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:25.796077   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:26.018114   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.018141   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:26.124357   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.905138   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.37373154s)
	I1205 20:57:26.905210   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905526   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905553   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.905567   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905576   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:26.905905   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905917   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964563   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.964593   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.964920   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.964940   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964974   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465231   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.92160273s)
	I1205 20:57:27.465236   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.840348969s)
	I1205 20:57:27.465312   46700 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:27.465289   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465379   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.465718   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465761   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.465771   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.465780   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465790   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.467788   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.467820   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.467829   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628166   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.503702639s)
	I1205 20:57:27.628242   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628262   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628592   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628617   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628627   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628637   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628714   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.628851   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628866   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628885   46700 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-061206"
	I1205 20:57:27.632134   46700 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:27.634065   46700 addons.go:502] enable addons completed in 2.428270131s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:28.052082   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:29.630980   46374 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503524 seconds
	I1205 20:57:29.631109   46374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:29.651107   46374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:30.184174   46374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:30.184401   46374 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-331495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:57:30.703275   46374 kubeadm.go:322] [bootstrap-token] Using token: 28cbrl.nve3765a0enwbcr0
	I1205 20:57:30.705013   46374 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:30.705155   46374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:30.718386   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:57:30.727275   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:30.734448   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:30.741266   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:30.746706   46374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:30.765198   46374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:57:31.046194   46374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:31.133417   46374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:31.133438   46374 kubeadm.go:322] 
	I1205 20:57:31.133501   46374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:31.133509   46374 kubeadm.go:322] 
	I1205 20:57:31.133647   46374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:31.133667   46374 kubeadm.go:322] 
	I1205 20:57:31.133707   46374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:31.133781   46374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:31.133853   46374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:31.133863   46374 kubeadm.go:322] 
	I1205 20:57:31.133918   46374 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:57:31.133925   46374 kubeadm.go:322] 
	I1205 20:57:31.133983   46374 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:57:31.133993   46374 kubeadm.go:322] 
	I1205 20:57:31.134042   46374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:31.134103   46374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:31.134262   46374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:31.134300   46374 kubeadm.go:322] 
	I1205 20:57:31.134417   46374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:57:31.134526   46374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:31.134541   46374 kubeadm.go:322] 
	I1205 20:57:31.134671   46374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.134823   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:31.134858   46374 kubeadm.go:322] 	--control-plane 
	I1205 20:57:31.134867   46374 kubeadm.go:322] 
	I1205 20:57:31.134986   46374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:31.134997   46374 kubeadm.go:322] 
	I1205 20:57:31.135114   46374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.135272   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:31.135908   46374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:57:31.135934   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:57:31.135944   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:31.137845   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:30.540402   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:33.040756   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:31.139429   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:31.181897   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:31.202833   46374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:31.202901   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.202910   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=embed-certs-331495 minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.298252   46374 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:31.569929   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.694250   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.294912   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.795323   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.295495   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.794998   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.294843   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.794730   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:35.295505   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.538542   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.538568   46700 pod_ready.go:81] duration metric: took 8.742457359s waiting for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.538579   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.540738   46700 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540763   46700 pod_ready.go:81] duration metric: took 2.177251ms waiting for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:34.540771   46700 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540777   46700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545336   46700 pod_ready.go:92] pod "kube-proxy-j68qr" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.545360   46700 pod_ready.go:81] duration metric: took 4.576584ms waiting for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545370   46700 pod_ready.go:38] duration metric: took 8.884340587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:34.545387   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:34.545442   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:34.561744   46700 api_server.go:72] duration metric: took 9.104792218s to wait for apiserver process to appear ...
	I1205 20:57:34.561769   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:34.561786   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:57:34.568456   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:57:34.569584   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:57:34.569608   46700 api_server.go:131] duration metric: took 7.832231ms to wait for apiserver health ...
	I1205 20:57:34.569618   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:34.573936   46700 system_pods.go:59] 4 kube-system pods found
	I1205 20:57:34.573962   46700 system_pods.go:61] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.573969   46700 system_pods.go:61] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.573979   46700 system_pods.go:61] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.573989   46700 system_pods.go:61] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.574004   46700 system_pods.go:74] duration metric: took 4.378461ms to wait for pod list to return data ...
	I1205 20:57:34.574016   46700 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:34.577236   46700 default_sa.go:45] found service account: "default"
	I1205 20:57:34.577258   46700 default_sa.go:55] duration metric: took 3.232577ms for default service account to be created ...
	I1205 20:57:34.577268   46700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:34.581061   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.581080   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.581086   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.581093   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.581098   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.581112   46700 retry.go:31] will retry after 312.287284ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:34.898504   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.898531   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.898536   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.898545   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.898549   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.898563   46700 retry.go:31] will retry after 340.858289ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.244211   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.244237   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.244242   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.244249   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.244253   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.244267   46700 retry.go:31] will retry after 398.30611ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.649011   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.649042   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.649050   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.649061   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.649068   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.649086   46700 retry.go:31] will retry after 397.404602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.052047   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.052079   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.052087   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.052097   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.052105   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.052124   46700 retry.go:31] will retry after 604.681853ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.662177   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.662206   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.662213   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.662223   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.662229   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.662247   46700 retry.go:31] will retry after 732.227215ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:37.399231   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:37.399264   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:37.399272   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:37.399282   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:37.399289   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:37.399308   46700 retry.go:31] will retry after 1.17612773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.795241   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.295081   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.795352   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.295506   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.794785   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.294797   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.794948   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.295478   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.795706   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:40.295444   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.581173   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:38.581201   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:38.581207   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:38.581220   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:38.581225   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:38.581239   46700 retry.go:31] will retry after 1.118915645s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:39.704807   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:39.704835   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:39.704841   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:39.704847   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:39.704854   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:39.704872   46700 retry.go:31] will retry after 1.49556329s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:41.205278   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:41.205316   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:41.205324   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:41.205331   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:41.205336   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:41.205357   46700 retry.go:31] will retry after 2.273757829s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:43.485079   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:43.485109   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:43.485125   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:43.485132   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:43.485137   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:43.485153   46700 retry.go:31] will retry after 2.2120181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:40.794725   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.295631   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.795542   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.295514   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.795481   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.295525   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.795463   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.295442   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.451570   46374 kubeadm.go:1088] duration metric: took 13.248732973s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:44.451605   46374 kubeadm.go:406] StartCluster complete in 5m13.096778797s
	I1205 20:57:44.451631   46374 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.451730   46374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:44.454306   46374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.454587   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:44.454611   46374 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:44.454695   46374 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-331495"
	I1205 20:57:44.454720   46374 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-331495"
	W1205 20:57:44.454731   46374 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:44.454766   46374 addons.go:69] Setting default-storageclass=true in profile "embed-certs-331495"
	I1205 20:57:44.454781   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.454783   46374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-331495"
	I1205 20:57:44.454840   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:57:44.454884   46374 addons.go:69] Setting metrics-server=true in profile "embed-certs-331495"
	I1205 20:57:44.454899   46374 addons.go:231] Setting addon metrics-server=true in "embed-certs-331495"
	W1205 20:57:44.454907   46374 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:44.454949   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.455191   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455213   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455216   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455231   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455237   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455259   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.473063   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1205 20:57:44.473083   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1205 20:57:44.473135   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I1205 20:57:44.473509   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.473642   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474153   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474171   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474179   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474197   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474336   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474566   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474637   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474761   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474785   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474877   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.475234   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475260   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.475295   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.475833   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475871   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.478828   46374 addons.go:231] Setting addon default-storageclass=true in "embed-certs-331495"
	W1205 20:57:44.478852   46374 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:44.478882   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.479277   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.479311   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.493193   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1205 20:57:44.493380   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:57:44.493637   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.493775   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.494092   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494108   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494242   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494252   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494488   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494624   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.494682   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.496908   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.497156   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.498954   46374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:44.500583   46374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:44.499205   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1205 20:57:44.502186   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:44.502199   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:44.502214   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.502313   46374 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.502329   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:44.502349   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.503728   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.504065   46374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-331495" context rescaled to 1 replicas
	I1205 20:57:44.504105   46374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:44.505773   46374 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:44.507622   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:44.505350   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.507719   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.505638   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.507792   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.507821   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.506710   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.507399   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508237   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.508287   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508353   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.508369   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508440   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.508506   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.508671   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508678   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.508996   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.509016   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.509373   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.509567   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.525720   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I1205 20:57:44.526352   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.526817   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.526831   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.527096   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.527248   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.529415   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.529714   46374 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.529725   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:44.529737   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.532475   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533019   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.533042   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.533393   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.533527   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.533614   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.688130   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:44.688235   46374 node_ready.go:35] waiting up to 6m0s for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727420   46374 node_ready.go:49] node "embed-certs-331495" has status "Ready":"True"
	I1205 20:57:44.727442   46374 node_ready.go:38] duration metric: took 39.185885ms waiting for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727450   46374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:44.732130   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:44.732147   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:44.738201   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.771438   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.811415   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:44.811441   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:44.813276   46374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:44.891164   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:44.891188   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:44.982166   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:46.640482   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.952307207s)
	I1205 20:57:46.640514   46374 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:46.640492   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.902257941s)
	I1205 20:57:46.640549   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640567   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.640954   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.640974   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.640985   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640994   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.641299   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.641316   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.641317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669046   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.669072   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.669393   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669467   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.669486   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229043   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.457564146s)
	I1205 20:57:47.229106   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229122   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.229427   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.229442   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229451   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229460   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.230375   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:47.230383   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.230399   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.269645   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.287430037s)
	I1205 20:57:47.269701   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.269717   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270028   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270044   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270053   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.270062   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270370   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270387   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270397   46374 addons.go:467] Verifying addon metrics-server=true in "embed-certs-331495"
	I1205 20:57:47.272963   46374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:45.704352   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:45.704382   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:45.704392   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:45.704402   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:45.704408   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:45.704427   46700 retry.go:31] will retry after 3.581529213s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:47.274340   46374 addons.go:502] enable addons completed in 2.819728831s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:47.280325   46374 pod_ready.go:102] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:48.746184   46374 pod_ready.go:92] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.746205   46374 pod_ready.go:81] duration metric: took 3.932903963s waiting for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.746212   46374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752060   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.752078   46374 pod_ready.go:81] duration metric: took 5.859638ms waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752088   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757347   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.757367   46374 pod_ready.go:81] duration metric: took 5.273527ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757375   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762850   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.762869   46374 pod_ready.go:81] duration metric: took 5.4878ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762876   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767874   46374 pod_ready.go:92] pod "kube-proxy-tbr8k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.767896   46374 pod_ready.go:81] duration metric: took 5.013139ms waiting for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767907   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141813   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:49.141836   46374 pod_ready.go:81] duration metric: took 373.922185ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141844   46374 pod_ready.go:38] duration metric: took 4.414384404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:49.141856   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:49.141898   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:49.156536   46374 api_server.go:72] duration metric: took 4.652397468s to wait for apiserver process to appear ...
	I1205 20:57:49.156566   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:49.156584   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:57:49.162837   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:57:49.164588   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:57:49.164606   46374 api_server.go:131] duration metric: took 8.03498ms to wait for apiserver health ...
	I1205 20:57:49.164613   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:49.346033   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:49.346065   46374 system_pods.go:61] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.346069   46374 system_pods.go:61] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.346074   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.346079   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.346082   46374 system_pods.go:61] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.346086   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.346092   46374 system_pods.go:61] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.346098   46374 system_pods.go:61] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:57:49.346105   46374 system_pods.go:74] duration metric: took 181.48718ms to wait for pod list to return data ...
	I1205 20:57:49.346111   46374 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:49.541758   46374 default_sa.go:45] found service account: "default"
	I1205 20:57:49.541783   46374 default_sa.go:55] duration metric: took 195.666774ms for default service account to be created ...
	I1205 20:57:49.541791   46374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:49.746101   46374 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:49.746131   46374 system_pods.go:89] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.746136   46374 system_pods.go:89] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.746142   46374 system_pods.go:89] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.746147   46374 system_pods.go:89] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.746150   46374 system_pods.go:89] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.746155   46374 system_pods.go:89] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.746170   46374 system_pods.go:89] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.746175   46374 system_pods.go:89] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Running
	I1205 20:57:49.746183   46374 system_pods.go:126] duration metric: took 204.388635ms to wait for k8s-apps to be running ...
	I1205 20:57:49.746193   46374 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:49.746241   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:49.764758   46374 system_svc.go:56] duration metric: took 18.554759ms WaitForService to wait for kubelet.
	I1205 20:57:49.764784   46374 kubeadm.go:581] duration metric: took 5.260652386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:49.764801   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:49.942067   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:49.942095   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:49.942105   46374 node_conditions.go:105] duration metric: took 177.300297ms to run NodePressure ...
	I1205 20:57:49.942114   46374 start.go:228] waiting for startup goroutines ...
	I1205 20:57:49.942120   46374 start.go:233] waiting for cluster config update ...
	I1205 20:57:49.942129   46374 start.go:242] writing updated cluster config ...
	I1205 20:57:49.942407   46374 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:49.995837   46374 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:57:49.997691   46374 out.go:177] * Done! kubectl is now configured to use "embed-certs-331495" cluster and "default" namespace by default
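	For context on the readiness checks logged above (node_ready.go, pod_ready.go, system_pods.go), the same state can be inspected with plain kubectl once the profile's context is active; a minimal sketch, assuming the embed-certs-331495 kubeconfig context from this run is available on the test host:

	# Illustrative only; mirrors the node/pod readiness checks in the log above.
	kubectl --context embed-certs-331495 get nodes
	kubectl --context embed-certs-331495 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context embed-certs-331495 -n kube-system get deploy metrics-server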
	I1205 20:57:49.291672   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:49.291700   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:49.291705   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:49.291713   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.291718   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:49.291736   46700 retry.go:31] will retry after 3.015806566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:52.313677   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:52.313703   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:52.313711   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:52.313721   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:52.313727   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:52.313747   46700 retry.go:31] will retry after 4.481475932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:56.804282   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:56.804308   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:56.804314   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:56.804321   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:56.804325   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:56.804340   46700 retry.go:31] will retry after 6.744179014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:03.556623   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:58:03.556652   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:03.556660   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:03.556669   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:03.556676   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:03.556696   46700 retry.go:31] will retry after 7.974872066s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:11.540488   46700 system_pods.go:86] 6 kube-system pods found
	I1205 20:58:11.540516   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:11.540522   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Pending
	I1205 20:58:11.540526   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Pending
	I1205 20:58:11.540530   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:11.540537   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:11.540541   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:11.540556   46700 retry.go:31] will retry after 10.29278609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:21.841415   46700 system_pods.go:86] 7 kube-system pods found
	I1205 20:58:21.841442   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:21.841450   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:21.841457   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:21.841463   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:21.841468   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:21.841478   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:21.841485   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:21.841503   46700 retry.go:31] will retry after 10.997616244s: missing components: kube-scheduler
	I1205 20:58:32.846965   46700 system_pods.go:86] 8 kube-system pods found
	I1205 20:58:32.846999   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:32.847007   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:32.847016   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:32.847023   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:32.847028   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:32.847032   46700 system_pods.go:89] "kube-scheduler-old-k8s-version-061206" [e19a40ac-ac9b-4dc8-8ed3-c13da266bb88] Running
	I1205 20:58:32.847041   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:32.847049   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:32.847061   46700 system_pods.go:126] duration metric: took 58.26978612s to wait for k8s-apps to be running ...
	I1205 20:58:32.847074   46700 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:58:32.847122   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:58:32.866233   46700 system_svc.go:56] duration metric: took 19.150294ms WaitForService to wait for kubelet.
	I1205 20:58:32.866267   46700 kubeadm.go:581] duration metric: took 1m7.409317219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:58:32.866308   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:58:32.870543   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:58:32.870569   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:58:32.870581   46700 node_conditions.go:105] duration metric: took 4.266682ms to run NodePressure ...
	I1205 20:58:32.870604   46700 start.go:228] waiting for startup goroutines ...
	I1205 20:58:32.870630   46700 start.go:233] waiting for cluster config update ...
	I1205 20:58:32.870646   46700 start.go:242] writing updated cluster config ...
	I1205 20:58:32.870888   46700 ssh_runner.go:195] Run: rm -f paused
	I1205 20:58:32.922554   46700 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1205 20:58:32.924288   46700 out.go:177] 
	W1205 20:58:32.925788   46700 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1205 20:58:32.927148   46700 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1205 20:58:32.928730   46700 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-061206" cluster and "default" namespace by default
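	The warning above flags a large minor-version skew (kubectl 1.28.4 against a 1.16.0 cluster). As the hint in the log suggests, the version-matched kubectl bundled with minikube can be used instead; a sketch, assuming the old-k8s-version-061206 profile from this run:

	# Illustrative only; runs kubectl at the version matching the cluster.
	out/minikube-linux-amd64 -p old-k8s-version-061206 kubectl -- get pods -A
	out/minikube-linux-amd64 -p old-k8s-version-061206 kubectl -- get nodes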
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:52:15 UTC, ends at Tue 2023-12-05 21:06:51 UTC. --
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.663249022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810411663231367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9bcfea27-966e-4520-a240-f33ba073664a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.663929899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a25d160-27b1-4061-b488-3a1a93c4f05f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.664005676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a25d160-27b1-4061-b488-3a1a93c4f05f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.664223962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a25d160-27b1-4061-b488-3a1a93c4f05f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.709404980Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c748afb8-cae5-464d-99f0-165127c9b878 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.709494736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c748afb8-cae5-464d-99f0-165127c9b878 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.710728940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a0545e98-d623-4e21-9ecf-f6eaedf65321 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.711336878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810411711312185,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a0545e98-d623-4e21-9ecf-f6eaedf65321 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.711918943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=97db63dc-8d57-41d2-ab96-5ee869005a82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.711997615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=97db63dc-8d57-41d2-ab96-5ee869005a82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.712330613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=97db63dc-8d57-41d2-ab96-5ee869005a82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.757060403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=345796c3-02f0-4398-b5c4-c33d4baf99ea name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.757204227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=345796c3-02f0-4398-b5c4-c33d4baf99ea name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.758579607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0b3c2a24-c446-4004-a4a0-3134ed218ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.759494699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810411759478645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0b3c2a24-c446-4004-a4a0-3134ed218ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.760262206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a7537828-7436-4ef9-8973-0ed4ab3d87f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.760307162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a7537828-7436-4ef9-8973-0ed4ab3d87f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.760456750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a7537828-7436-4ef9-8973-0ed4ab3d87f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.796986127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c86b4d00-f404-4ffa-807c-bf7cdf5c6da2 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.797151678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c86b4d00-f404-4ffa-807c-bf7cdf5c6da2 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.798792128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f9394883-5fb1-4227-be50-e374e530596b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.799327440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810411799309231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f9394883-5fb1-4227-be50-e374e530596b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.800174552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47e4a0e2-b7e5-4cf3-90c9-047bcd91e012 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.800224608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47e4a0e2-b7e5-4cf3-90c9-047bcd91e012 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:06:51 embed-certs-331495 crio[715]: time="2023-12-05 21:06:51.800416468Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47e4a0e2-b7e5-4cf3-90c9-047bcd91e012 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	182d80c604bcb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   de22a871f7b64       storage-provisioner
	bd8c3411f3f5b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   0fd4abd8da6c4       coredns-5dd5756b68-6d7wq
	0ae4b48879d4a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   d237f5479c8bd       kube-proxy-tbr8k
	d780a6357dc09       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   7f87da88bd930       kube-scheduler-embed-certs-331495
	97a6cce6fb0ca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   4e711bf51b91a       etcd-embed-certs-331495
	3eae6f73bd78e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   78292bfed715c       kube-controller-manager-embed-certs-331495
	dbd0f55bdb24e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   d09b800aa8f8b       kube-apiserver-embed-certs-331495
	
	* 
	* ==> coredns [bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-331495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-331495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=embed-certs-331495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:57:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-331495
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:06:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:02:57 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:02:57 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:02:57 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:02:57 +0000   Tue, 05 Dec 2023 20:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    embed-certs-331495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 fefaa329554e4f489cf4b02aa9a4e7a7
	  System UUID:                fefaa329-554e-4f48-9cf4-b02aa9a4e7a7
	  Boot ID:                    96e331ea-2fcf-49e4-8546-22ef663c0c0b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6d7wq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-331495                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-331495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-331495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-tbr8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-331495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-wv2t6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node embed-certs-331495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node embed-certs-331495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node embed-certs-331495 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node embed-certs-331495 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m11s  kubelet          Node embed-certs-331495 status is now: NodeReady
	  Normal  RegisteredNode           9m9s   node-controller  Node embed-certs-331495 event: Registered Node embed-certs-331495 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075888] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.690420] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.721951] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150625] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.652296] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000067] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.641480] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.148003] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.180748] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.182513] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.317315] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +17.561225] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[Dec 5 20:53] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:57] systemd-fstab-generator[3522]: Ignoring "noauto" for root device
	[  +9.800395] systemd-fstab-generator[3850]: Ignoring "noauto" for root device
	[ +14.723507] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be] <==
	* {"level":"info","ts":"2023-12-05T20:57:25.213716Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:57:25.213905Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-05T20:57:25.21428Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2023-12-05T20:57:25.214395Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2023-12-05T20:57:25.21292Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:57:25.221898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=(728820823681708824)"}
	{"level":"info","ts":"2023-12-05T20:57:25.222236Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2023-12-05T20:57:25.858975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.859036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.859053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.85912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.85913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.859139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.859146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.86089Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:embed-certs-331495 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:57:25.861146Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.861235Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:57:25.862478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:57:25.862533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T20:57:25.861164Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:57:25.863421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:57:25.863571Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.863647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.863672Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.864318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	
	* 
	* ==> kernel <==
	*  21:06:52 up 14 min,  0 users,  load average: 0.81, 0.61, 0.41
	Linux embed-certs-331495 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8] <==
	* W1205 21:02:28.718791       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:28.718850       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:02:28.718859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:02:28.718962       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:02:28.719045       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:02:28.720043       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:03:27.572336       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:03:28.719502       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:03:28.719606       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:03:28.719633       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:03:28.720870       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:03:28.720959       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:03:28.720985       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:04:27.571558       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 21:05:27.572275       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:05:28.720380       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:05:28.720418       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:05:28.720430       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:05:28.721668       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:05:28.721838       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:05:28.721873       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:06:27.572172       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3] <==
	* I1205 21:01:17.346735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="146.162µs"
	E1205 21:01:43.865291       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:01:44.320872       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:13.871722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:14.335394       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:02:43.878439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:02:44.345535       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:03:13.883680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:14.353394       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:03:43.891674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:03:44.363202       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:03:54.347526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="356.034µs"
	I1205 21:04:07.345889       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="210.523µs"
	E1205 21:04:13.897816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:14.371932       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:04:43.903860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:04:44.381701       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:13.909330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:14.391158       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:43.916289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:44.402404       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:13.922693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:14.412999       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:43.931152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:44.426242       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815] <==
	* I1205 20:57:47.876295       1 server_others.go:69] "Using iptables proxy"
	I1205 20:57:47.936340       1 node.go:141] Successfully retrieved node IP: 192.168.72.180
	I1205 20:57:48.301981       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:57:48.302401       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:57:48.417262       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:57:48.420222       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:57:48.421161       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:57:48.421210       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:57:48.441326       1 config.go:188] "Starting service config controller"
	I1205 20:57:48.442563       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:57:48.442685       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:57:48.443547       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:57:48.448649       1 config.go:315] "Starting node config controller"
	I1205 20:57:48.448738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:57:48.545240       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:57:48.545335       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:57:48.548818       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d] <==
	* W1205 20:57:27.742726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:27.745264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:27.742761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.609768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:28.609874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.658417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:28.658482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 20:57:28.736502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:28.736583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.752802       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:57:28.752880       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:57:28.887957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:28.888293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 20:57:28.919446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:28.919540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:28.976009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:57:28.976301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 20:57:29.036560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:29.036693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 20:57:29.073587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:29.073707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1205 20:57:31.330568       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:52:15 UTC, ends at Tue 2023-12-05 21:06:52 UTC. --
	Dec 05 21:04:07 embed-certs-331495 kubelet[3857]: E1205 21:04:07.326954    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:04:22 embed-certs-331495 kubelet[3857]: E1205 21:04:22.325833    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:04:31 embed-certs-331495 kubelet[3857]: E1205 21:04:31.397407    3857 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:04:31 embed-certs-331495 kubelet[3857]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:04:31 embed-certs-331495 kubelet[3857]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:04:31 embed-certs-331495 kubelet[3857]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:04:35 embed-certs-331495 kubelet[3857]: E1205 21:04:35.328911    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:04:46 embed-certs-331495 kubelet[3857]: E1205 21:04:46.325891    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:04:57 embed-certs-331495 kubelet[3857]: E1205 21:04:57.327836    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:05:11 embed-certs-331495 kubelet[3857]: E1205 21:05:11.326625    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:05:24 embed-certs-331495 kubelet[3857]: E1205 21:05:24.327266    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:05:31 embed-certs-331495 kubelet[3857]: E1205 21:05:31.396571    3857 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:05:31 embed-certs-331495 kubelet[3857]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:05:31 embed-certs-331495 kubelet[3857]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:05:31 embed-certs-331495 kubelet[3857]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:05:37 embed-certs-331495 kubelet[3857]: E1205 21:05:37.328273    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:05:49 embed-certs-331495 kubelet[3857]: E1205 21:05:49.326825    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:06:02 embed-certs-331495 kubelet[3857]: E1205 21:06:02.326203    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:06:15 embed-certs-331495 kubelet[3857]: E1205 21:06:15.329031    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:06:27 embed-certs-331495 kubelet[3857]: E1205 21:06:27.327601    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:06:31 embed-certs-331495 kubelet[3857]: E1205 21:06:31.397768    3857 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:06:31 embed-certs-331495 kubelet[3857]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:06:31 embed-certs-331495 kubelet[3857]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:06:31 embed-certs-331495 kubelet[3857]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:06:42 embed-certs-331495 kubelet[3857]: E1205 21:06:42.326033    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	
	* 
	* ==> storage-provisioner [182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa] <==
	* I1205 20:57:48.591287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:48.603958       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:48.604131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:48.616732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:48.617676       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9659a339-991f-4132-8dee-e7c6e5a0d76f", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9 became leader
	I1205 20:57:48.617863       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9!
	I1205 20:57:48.719055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-331495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wv2t6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6: exit status 1 (78.458566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wv2t6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 20:59:00.108650   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 21:00:16.959749   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 21:02:37.060640   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 21:02:46.651501   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 21:04:09.701930   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 21:05:16.960210   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061206 -n old-k8s-version-061206
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:07:33.537661742 +0000 UTC m=+5566.350121375
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-061206 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-061206 logs -n 25: (1.665983397s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
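The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the drift because it stays inside a small tolerance. A minimal Go sketch of that kind of clock-delta check follows; the 2-second tolerance and the helper name are illustrative assumptions, not minikube's actual API, and the timestamps are the ones from the log.

```go
package main

import (
	"fmt"
	"time"
)

// withinClockTolerance reports whether the absolute drift between the guest
// and host clocks is small enough to skip resynchronising the guest time.
// The 2-second tolerance is an assumed value for illustration only.
func withinClockTolerance(guest, host time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= 2*time.Second
}

func main() {
	// Guest time as reported by `date +%s.%N` over SSH, host time from the log above.
	guest := time.Unix(1701809482, 140720971)
	host := time.Unix(1701809482, 72625275)

	if delta, ok := withinClockTolerance(guest, host); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drift too large: %v\n", delta)
	}
}
```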
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
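The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the requested pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A rough Go sketch of the same text transformation, applied to an in-memory copy of the file with an illustrative input, is shown below; it is not the code minikube actually runs.

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: pin the pause image,
// force cgroup_manager to the requested driver, and re-add conmon_cgroup = "pod".
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
	return conf
}

func main() {
	// Illustrative 02-crio.conf fragment; the real file contains more settings.
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.1", "cgroupfs"))
}
```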
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
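When `sysctl net.bridge.bridge-nf-call-iptables` fails because the br_netfilter module is not loaded yet, the log shows the fallback of loading the module with `modprobe br_netfilter` and then enabling IPv4 forwarding. A hedged Go sketch of that try-then-fallback sequence over exec follows; the command set is inferred from the log lines above, and running it requires root.

```go
package main

import (
	"log"
	"os/exec"
)

// run executes a command and logs (but returns) any failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v failed: %v (%s)", name, args, err, out)
	}
	return err
}

func main() {
	// Verify bridge netfilter is usable; if not, load the kernel module first.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			log.Fatal("cannot enable bridge netfilter")
		}
	}
	// Make sure the node forwards IPv4 traffic for pod networking.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
```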
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
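The one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP. A small Go sketch of the same idempotent rewrite, operating on an in-memory copy of the file, is shown below; the sample input is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any existing line for the given hostname and
// appends a fresh "ip<TAB>hostname" entry, mirroring the bash one-liner above.
func ensureHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "192.168.50.1", "host.minikube.internal"))
}
```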
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
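The retry.go lines interleaved above poll the libvirt DHCP leases with a growing delay until the no-preload VM reports an IP address. A small Go sketch of that poll-with-increasing-backoff pattern follows; lookupIP is a hypothetical stand-in for whatever actually queries the hypervisor, and the backoff growth factor is an assumption for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases;
// it returns an error until the guest has been assigned an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.61.162", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		// Grow the delay between attempts, roughly as the log shows.
		delay = delay * 3 / 2
	}
}
```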
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
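The openssl calls above do two things: compute each CA certificate's subject hash so it can be symlinked as `<hash>.0` under /etc/ssl/certs, and verify with `-checkend 86400` that every cluster certificate remains valid for at least another 24 hours. A hedged Go sketch of the expiry part, using the standard library instead of openssl and an illustrative certificate path, is shown below.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAtLeast mirrors `openssl x509 -checkend 86400`: it reports whether
// the PEM-encoded certificate at path is still valid for the given duration.
func validForAtLeast(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks the apiserver, etcd and front-proxy certs.
	ok, err := validForAtLeast("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for at least 24h:", ok)
}
```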
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.787781   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.799426   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:51:53.809064   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:51:53.809101   46700 api_server.go:131] duration metric: took 8.476226007s to wait for apiserver health ...
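
The healthz exchange above is the apiserver coming back up: anonymous probes are first rejected outright (403), then individual poststarthooks such as rbac/bootstrap-roles still report failed (500), and finally the endpoint answers 200. A minimal, hypothetical sketch of such a poll loop (not minikube's api_server.go; TLS verification is skipped because the probe runs anonymously before the client has the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz probes an HTTPS /healthz endpoint until it answers 200 OK
	// or the timeout elapses, printing non-200 bodies like the lines above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.50.116:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
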
	I1205 20:51:53.809112   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:53.809120   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:53.811188   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:51:53.496825   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.170377466s)
	I1205 20:51:53.496856   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1205 20:51:53.496877   46866 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:53.496925   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:55.657835   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.160865472s)
	I1205 20:51:55.657869   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1205 20:51:55.657898   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:55.657955   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:52.104758   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105274   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105301   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:52.105232   47900 retry.go:31] will retry after 2.172151449s: waiting for machine to come up
	I1205 20:51:54.279576   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280091   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:54.280054   47900 retry.go:31] will retry after 3.042324499s: waiting for machine to come up
	I1205 20:51:53.812841   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:51:53.835912   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:51:53.920892   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:51:53.943982   46700 system_pods.go:59] 7 kube-system pods found
	I1205 20:51:53.944026   46700 system_pods.go:61] "coredns-5644d7b6d9-kqhgk" [473e53e3-a0bd-4dcb-88c1-d61e9cc3e686] Running
	I1205 20:51:53.944034   46700 system_pods.go:61] "etcd-old-k8s-version-061206" [a2a6a459-41a3-49e3-b32e-a091317390ea] Running
	I1205 20:51:53.944041   46700 system_pods.go:61] "kube-apiserver-old-k8s-version-061206" [9cf24995-fccb-47e4-8d4a-870198b7c82f] Running
	I1205 20:51:53.944054   46700 system_pods.go:61] "kube-controller-manager-old-k8s-version-061206" [225a4a8b-2b6e-46f4-8bd9-9a375b05c23c] Pending
	I1205 20:51:53.944061   46700 system_pods.go:61] "kube-proxy-r5n6g" [5db8876d-ecff-40b3-a61d-aeaf7870166c] Running
	I1205 20:51:53.944068   46700 system_pods.go:61] "kube-scheduler-old-k8s-version-061206" [de56d925-45b3-4c36-b2c2-c90938793aa2] Running
	I1205 20:51:53.944075   46700 system_pods.go:61] "storage-provisioner" [d5d57d93-f94b-4a3e-8c65-25cd4d71b9d5] Running
	I1205 20:51:53.944083   46700 system_pods.go:74] duration metric: took 23.165628ms to wait for pod list to return data ...
	I1205 20:51:53.944093   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:51:53.956907   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:51:53.956949   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:51:53.956964   46700 node_conditions.go:105] duration metric: took 12.864098ms to run NodePressure ...
	I1205 20:51:53.956986   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:54.482145   46700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:51:54.492629   46700 retry.go:31] will retry after 326.419845ms: kubelet not initialised
	I1205 20:51:54.826701   46700 retry.go:31] will retry after 396.475289ms: kubelet not initialised
	I1205 20:51:55.228971   46700 retry.go:31] will retry after 752.153604ms: kubelet not initialised
	I1205 20:51:55.987713   46700 retry.go:31] will retry after 881.822561ms: kubelet not initialised
	I1205 20:51:56.877407   46700 retry.go:31] will retry after 824.757816ms: kubelet not initialised
	I1205 20:51:57.707927   46700 retry.go:31] will retry after 2.392241385s: kubelet not initialised
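
The repeated `retry.go:31] will retry after ...: kubelet not initialised` lines are a jittered backoff loop waiting for the restarted kubelet to report its static pods. A rough, hypothetical equivalent of that pattern (the initial wait and cap are assumptions, not minikube's values):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls check until it succeeds or maxElapsed passes,
	// sleeping a randomized, growing interval between attempts.
	func retryWithBackoff(check func() error, maxElapsed time.Duration) error {
		start := time.Now()
		wait := 300 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if wait < 4*time.Second {
				wait *= 2 // grow the base interval, capped at a few seconds
			}
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("kubelet not initialised")
			}
			return nil
		}, time.Minute)
		fmt.Println("done:", err)
	}
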
	I1205 20:51:58.643374   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.985387711s)
	I1205 20:51:58.643408   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1205 20:51:58.643434   46866 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:58.643500   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:59.407245   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:51:59.407282   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:59.407333   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
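
The alternating `crio.go:257] Loading image: ...` / `Run: sudo podman load -i ...` lines push the cached image tarballs into CRI-O's image store one at a time. A hypothetical local equivalent of that loop using os/exec (the real runs above go through minikube's ssh_runner over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	func main() {
		// Image tarballs staged by the cache, as seen in the log above.
		images := []string{
			"kube-scheduler_v1.29.0-rc.1",
			"etcd_3.5.10-0",
			"kube-apiserver_v1.29.0-rc.1",
			"coredns_v1.11.1",
			"kube-controller-manager_v1.29.0-rc.1",
			"storage-provisioner_v5",
			"kube-proxy_v1.29.0-rc.1",
		}
		for _, img := range images {
			tar := filepath.Join("/var/lib/minikube/images", img)
			// Equivalent of the "sudo podman load -i <tar>" runs in the log,
			// executed locally instead of over SSH.
			out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
			if err != nil {
				fmt.Printf("loading %s failed: %v\n%s\n", img, err, out)
				continue
			}
			fmt.Printf("loaded %s\n", img)
		}
	}
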
	I1205 20:51:57.324016   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324534   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324565   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:57.324482   47900 retry.go:31] will retry after 3.449667479s: waiting for machine to come up
	I1205 20:52:00.776644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777141   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Found IP for machine: 192.168.39.27
	I1205 20:52:00.777175   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777186   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserving static IP address...
	I1205 20:52:00.777825   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserved static IP address: 192.168.39.27
	I1205 20:52:00.777878   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.777892   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for SSH to be available...
	I1205 20:52:00.777918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | skip adding static IP to network mk-default-k8s-diff-port-463614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"}
	I1205 20:52:00.777929   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Getting to WaitForSSH function...
	I1205 20:52:00.780317   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.780729   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH client type: external
	I1205 20:52:00.780909   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa (-rw-------)
	I1205 20:52:00.780940   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:00.780959   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | About to run SSH command:
	I1205 20:52:00.780980   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | exit 0
	I1205 20:52:00.922857   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:00.923204   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetConfigRaw
	I1205 20:52:00.923973   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:00.927405   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.927885   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.927918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.928217   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:52:00.928469   47365 machine.go:88] provisioning docker machine ...
	I1205 20:52:00.928497   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:00.928735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.928912   47365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-463614"
	I1205 20:52:00.928938   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.929092   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:00.931664   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932096   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.932130   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:00.932496   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932672   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932822   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:00.932990   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:00.933401   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:00.933420   47365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-463614 && echo "default-k8s-diff-port-463614" | sudo tee /etc/hostname
	I1205 20:52:01.078295   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-463614
	
	I1205 20:52:01.078332   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.081604   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082051   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.082079   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082240   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.082492   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.083034   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.083506   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.083535   47365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-463614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-463614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-463614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:01.215856   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:01.215884   47365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:01.215912   47365 buildroot.go:174] setting up certificates
	I1205 20:52:01.215927   47365 provision.go:83] configureAuth start
	I1205 20:52:01.215947   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:01.216246   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:01.219169   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219465   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.219503   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.221768   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222137   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.222171   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222410   47365 provision.go:138] copyHostCerts
	I1205 20:52:01.222493   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:01.222508   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:01.222568   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:01.222686   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:01.222717   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:01.222757   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:01.222825   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:01.222832   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:01.222856   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:01.222921   47365 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-463614 san=[192.168.39.27 192.168.39.27 localhost 127.0.0.1 minikube default-k8s-diff-port-463614]
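
provision.go:112 generates the machine's server certificate with the VM IP, localhost and the profile name as SANs, signed by the CA under .minikube/certs. A self-contained sketch of producing a certificate with the same SAN list (self-signed here for brevity; minikube signs with ca.pem/ca-key.pem instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-463614"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-463614"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.27"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed for brevity: template and parent are the same cert.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
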
	I1205 20:52:02.247282   46374 start.go:369] acquired machines lock for "embed-certs-331495" in 54.15977635s
	I1205 20:52:02.247348   46374 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:52:02.247360   46374 fix.go:54] fixHost starting: 
	I1205 20:52:02.247794   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:02.247830   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:02.265529   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I1205 20:52:02.265970   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:02.266457   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:52:02.266484   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:02.266825   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:02.267016   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:02.267185   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:52:02.268838   46374 fix.go:102] recreateIfNeeded on embed-certs-331495: state=Stopped err=<nil>
	I1205 20:52:02.268859   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	W1205 20:52:02.269010   46374 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:52:02.270658   46374 out.go:177] * Restarting existing kvm2 VM for "embed-certs-331495" ...
	I1205 20:52:00.114757   46700 retry.go:31] will retry after 2.136164682s: kubelet not initialised
	I1205 20:52:02.258242   46700 retry.go:31] will retry after 4.673214987s: kubelet not initialised
	I1205 20:52:01.474739   47365 provision.go:172] copyRemoteCerts
	I1205 20:52:01.474804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:01.474834   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.477249   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477632   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.477659   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477908   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.478119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.478313   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.478463   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:01.569617   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:01.594120   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 20:52:01.618066   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:52:01.643143   47365 provision.go:86] duration metric: configureAuth took 427.201784ms
	I1205 20:52:01.643169   47365 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:01.643353   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:01.643435   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.646320   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.646821   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.646881   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.647001   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.647206   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647407   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647555   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.647721   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.648105   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.648135   47365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:01.996428   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:01.996456   47365 machine.go:91] provisioned docker machine in 1.067968652s
	I1205 20:52:01.996468   47365 start.go:300] post-start starting for "default-k8s-diff-port-463614" (driver="kvm2")
	I1205 20:52:01.996482   47365 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:01.996502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:01.996804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:01.996829   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.999880   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000345   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.000378   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.000733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.000872   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.001041   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.088194   47365 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:02.092422   47365 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:02.092447   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:02.092522   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:02.092607   47365 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:02.092692   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:02.100847   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:02.125282   47365 start.go:303] post-start completed in 128.798422ms
	I1205 20:52:02.125308   47365 fix.go:56] fixHost completed within 20.234129302s
	I1205 20:52:02.125334   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.128159   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128506   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.128539   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.128970   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129157   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129330   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.129505   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:02.129980   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:02.130001   47365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:52:02.247134   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809522.185244520
	
	I1205 20:52:02.247160   47365 fix.go:206] guest clock: 1701809522.185244520
	I1205 20:52:02.247170   47365 fix.go:219] Guest: 2023-12-05 20:52:02.18524452 +0000 UTC Remote: 2023-12-05 20:52:02.125313647 +0000 UTC m=+165.907305797 (delta=59.930873ms)
	I1205 20:52:02.247193   47365 fix.go:190] guest clock delta is within tolerance: 59.930873ms
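
fix.go:219 parses the guest clock read over SSH (`date +%s.%N`), diffs it against the host clock, and only triggers a resync when the delta exceeds a tolerance; here the 59.9ms skew is accepted. A minimal sketch of that comparison (the 2s tolerance is an assumption for illustration, not minikube's value):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` on the guest, as captured in the log.
		guestRaw := "1701809522.185244520"
		sec, err := strconv.ParseFloat(guestRaw, 64)
		if err != nil {
			panic(err)
		}
		// Float conversion loses a little nanosecond precision; fine for a skew check.
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		host := time.Now()

		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		if delta > tolerance {
			fmt.Printf("guest clock off by %s, would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %s is within tolerance\n", delta)
		}
	}
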
	I1205 20:52:02.247199   47365 start.go:83] releasing machines lock for "default-k8s-diff-port-463614", held for 20.356057608s
	I1205 20:52:02.247233   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.247561   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:02.250476   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.250918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.250952   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.251123   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.251833   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252026   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252117   47365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:02.252168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.252434   47365 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:02.252461   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.255221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255382   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.255750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.255949   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.256004   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.256060   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256278   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.256288   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256453   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256447   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.256586   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256698   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.343546   47365 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:02.368171   47365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:02.518472   47365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:02.524733   47365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:02.524808   47365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:02.541607   47365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:02.541632   47365 start.go:475] detecting cgroup driver to use...
	I1205 20:52:02.541703   47365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:02.560122   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:02.575179   47365 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:02.575244   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:02.591489   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:02.606022   47365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:02.711424   47365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:02.828436   47365 docker.go:219] disabling docker service ...
	I1205 20:52:02.828515   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:02.844209   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:02.860693   47365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:02.979799   47365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:03.111682   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:03.128706   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:03.147984   47365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:03.148057   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.160998   47365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:03.161068   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.173347   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.185126   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.195772   47365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:03.206308   47365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:03.215053   47365 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:03.215103   47365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:03.227755   47365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:03.237219   47365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:03.369712   47365 ssh_runner.go:195] Run: sudo systemctl restart crio
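
The status-255 sysctl failure just above is expected on a fresh guest: `net.bridge.bridge-nf-call-iptables` only exists once the br_netfilter module is loaded, so crio.go falls back to `modprobe br_netfilter`, enables IP forwarding, and restarts CRI-O. A hypothetical local equivalent of that check-then-fallback sequence:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// The sysctl key only appears after br_netfilter is loaded, so a
		// failure here is tolerated and simply triggers the modprobe.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("sysctl check failed (probably fine):", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			panic(err)
		}
		if err := run("sudo", "systemctl", "restart", "crio"); err != nil {
			panic(err)
		}
		fmt.Println("bridge netfilter and CRI-O ready")
	}
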
	I1205 20:52:03.561508   47365 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:03.561575   47365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:03.569369   47365 start.go:543] Will wait 60s for crictl version
	I1205 20:52:03.569437   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:52:03.575388   47365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:03.618355   47365 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:03.618458   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.670174   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.716011   47365 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:02.272006   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Start
	I1205 20:52:02.272171   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring networks are active...
	I1205 20:52:02.272890   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network default is active
	I1205 20:52:02.273264   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network mk-embed-certs-331495 is active
	I1205 20:52:02.273634   46374 main.go:141] libmachine: (embed-certs-331495) Getting domain xml...
	I1205 20:52:02.274223   46374 main.go:141] libmachine: (embed-certs-331495) Creating domain...
	I1205 20:52:03.644135   46374 main.go:141] libmachine: (embed-certs-331495) Waiting to get IP...
	I1205 20:52:03.645065   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.645451   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.645561   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.645439   48036 retry.go:31] will retry after 246.973389ms: waiting for machine to come up
	I1205 20:52:03.894137   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.894708   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.894813   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.894768   48036 retry.go:31] will retry after 353.753964ms: waiting for machine to come up
	I1205 20:52:04.250496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.251201   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.251231   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.251151   48036 retry.go:31] will retry after 370.705045ms: waiting for machine to come up
	I1205 20:52:04.623959   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.624532   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.624563   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.624488   48036 retry.go:31] will retry after 409.148704ms: waiting for machine to come up
	I1205 20:52:05.035991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.036492   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.036521   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.036458   48036 retry.go:31] will retry after 585.089935ms: waiting for machine to come up
	I1205 20:52:01.272757   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.865397348s)
	I1205 20:52:01.272791   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1205 20:52:01.272823   46866 cache_images.go:123] Successfully loaded all cached images
	I1205 20:52:01.272830   46866 cache_images.go:92] LoadImages completed in 17.860858219s
	I1205 20:52:01.272913   46866 ssh_runner.go:195] Run: crio config
	I1205 20:52:01.346651   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:01.346671   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:01.346689   46866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:01.346715   46866 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.162 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143651 NodeName:no-preload-143651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:01.346890   46866 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143651"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:01.347005   46866 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
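	The kubeadm.go:176/181 lines above show the computed options and the InitConfiguration/ClusterConfiguration/KubeletConfiguration they expand into before the YAML is copied to the node. A rough Go sketch, assuming a text/template rendering similar in spirit to (but not copied from) minikube's bootstrapper templates, covering only the InitConfiguration slice with the values logged above:

	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	  taints: []
	`

	func main() {
		params := struct {
			AdvertiseAddress string
			APIServerPort    int
			NodeName         string
		}{"192.168.61.162", 8443, "no-preload-143651"}

		// The rendered YAML is printed here; the real flow scp's it to the VM
		// as /var/tmp/minikube/kubeadm.yaml.new and diffs it against the old file.
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}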
	I1205 20:52:01.347080   46866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1205 20:52:01.360759   46866 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:01.360818   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:01.372537   46866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 20:52:01.389057   46866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1205 20:52:01.405689   46866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1205 20:52:01.426066   46866 ssh_runner.go:195] Run: grep 192.168.61.162	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:01.430363   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:01.443015   46866 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651 for IP: 192.168.61.162
	I1205 20:52:01.443049   46866 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:01.443202   46866 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:01.443254   46866 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:01.443337   46866 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.key
	I1205 20:52:01.443423   46866 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key.5bf94fca
	I1205 20:52:01.443477   46866 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key
	I1205 20:52:01.443626   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:01.443664   46866 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:01.443689   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:01.443729   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:01.443768   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:01.443800   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:01.443868   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:01.444505   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:01.471368   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:01.495925   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:01.520040   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:01.542515   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:01.565061   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:01.592011   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:01.615244   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:01.640425   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:01.666161   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:01.688991   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:01.711978   46866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:01.728642   46866 ssh_runner.go:195] Run: openssl version
	I1205 20:52:01.734248   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:01.746741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751589   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751647   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.757299   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:01.768280   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:01.779234   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783897   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783961   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.789668   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:01.800797   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:01.814741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819713   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819774   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.825538   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:01.836443   46866 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:01.842191   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:01.850025   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:01.857120   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:01.863507   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:01.870887   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:01.878657   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
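	Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now. The same check sketched in Go with crypto/x509; the path is just one of the certificate files listed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// -checkend 86400 asks: will the cert still be valid 86400 seconds from now?
		deadline := time.Now().Add(86400 * time.Second)
		if deadline.After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; it would be regenerated")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}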
	I1205 20:52:01.886121   46866 kubeadm.go:404] StartCluster: {Name:no-preload-143651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:01.886245   46866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:01.886311   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:01.933026   46866 cri.go:89] found id: ""
	I1205 20:52:01.933096   46866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:01.946862   46866 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:01.946891   46866 kubeadm.go:636] restartCluster start
	I1205 20:52:01.946950   46866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:01.959468   46866 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.960467   46866 kubeconfig.go:92] found "no-preload-143651" server: "https://192.168.61.162:8443"
	I1205 20:52:01.962804   46866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:01.975351   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.975427   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:01.988408   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.988439   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.988493   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.001669   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:02.502716   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:02.502781   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.515220   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.002777   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.002843   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.016667   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.501748   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.501840   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.515761   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.001797   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.001873   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.018140   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.502697   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.502791   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.518059   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.002414   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.002515   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.021107   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.502637   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.502733   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.521380   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
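	The repeated "Checking apiserver status ..." lines above show the bootstrapper polling roughly every 500ms for a kube-apiserver process via "sudo pgrep -xnf kube-apiserver.*minikube.*". A local stand-in for that polling loop, assuming a two-minute deadline and running pgrep directly instead of over ssh:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero when no matching process exists yet.
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			// Not up yet; wait half a second and check again.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the kube-apiserver process")
	}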
	I1205 20:52:03.717595   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:03.720774   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721210   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:03.721242   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721414   47365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:03.726330   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:03.738414   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:03.738479   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:03.777318   47365 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:03.777380   47365 ssh_runner.go:195] Run: which lz4
	I1205 20:52:03.781463   47365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:52:03.785728   47365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:03.785759   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:05.712791   47365 crio.go:444] Took 1.931355 seconds to copy over tarball
	I1205 20:52:05.712888   47365 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:06.939842   46700 retry.go:31] will retry after 8.345823287s: kubelet not initialised
	I1205 20:52:05.623348   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.623894   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.623928   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.623844   48036 retry.go:31] will retry after 819.796622ms: waiting for machine to come up
	I1205 20:52:06.445034   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:06.445471   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:06.445504   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:06.445427   48036 retry.go:31] will retry after 716.017152ms: waiting for machine to come up
	I1205 20:52:07.162965   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:07.163496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:07.163526   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:07.163445   48036 retry.go:31] will retry after 1.085415508s: waiting for machine to come up
	I1205 20:52:08.250373   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:08.250962   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:08.250999   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:08.250909   48036 retry.go:31] will retry after 1.128069986s: waiting for machine to come up
	I1205 20:52:09.380537   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:09.381001   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:09.381027   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:09.380964   48036 retry.go:31] will retry after 1.475239998s: waiting for machine to come up
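	The embed-certs-331495 lines above show libmachine waiting for the freshly started VM to obtain a DHCP lease, retrying with a delay that grows on each attempt. A sketch of that wait loop; shelling out to "virsh domifaddr" here is an assumption standing in for the libvirt API that the kvm2 driver actually uses:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// lookupIP asks libvirt (via virsh) for the domain's current IPv4 lease.
	func lookupIP(domain string) (string, bool) {
		out, err := exec.Command("virsh", "domifaddr", domain).Output()
		if err != nil {
			return "", false
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "ipv4") {
				fields := strings.Fields(line)
				return fields[len(fields)-1], true
			}
		}
		return "", false
	}

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			if ip, ok := lookupIP("embed-certs-331495"); ok {
				fmt.Println("machine is up at", ip)
				return
			}
			fmt.Printf("attempt %d: no IP yet, retrying after %s\n", attempt, delay)
			time.Sleep(delay)
			delay += delay / 2 // roughly the growing backoff seen in the log
		}
		fmt.Println("gave up waiting for the machine to come up")
	}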
	I1205 20:52:06.002168   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.002247   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.025123   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:06.502715   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.502831   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.519395   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.001937   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.002068   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.019028   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.501962   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.502059   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.515098   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.002769   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.002909   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.020137   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.501807   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.501949   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.518082   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.002421   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.002505   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.016089   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.502171   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.502261   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.515449   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.001975   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.002117   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.013831   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.502398   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.502481   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.514939   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.946250   47365 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.233316669s)
	I1205 20:52:08.946291   47365 crio.go:451] Took 3.233468 seconds to extract the tarball
	I1205 20:52:08.946304   47365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:08.988526   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:09.041782   47365 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:09.041812   47365 cache_images.go:84] Images are preloaded, skipping loading
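	The preload path above checks whether /preloaded.tar.lz4 already exists on the node (the stat probe), copies the cached tarball over when it does not, unpacks it into /var with "tar -I lz4", and then deletes the archive. A compressed sketch of that flow, with local commands standing in for the ssh/scp steps of the real run:

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v failed: %v", name, args, err)
		}
	}

	func main() {
		const archive = "/preloaded.tar.lz4"
		if _, err := os.Stat(archive); err != nil {
			// Archive missing: the real run scp's it from the host's
			// preloaded-images cache; here we only report the fact.
			log.Printf("%s not present, would copy it from the preload cache", archive)
			return
		}
		run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", archive) // unpack images and etcd data
		run("sudo", "rm", "-f", archive)                              // free the disk space again
	}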
	I1205 20:52:09.041908   47365 ssh_runner.go:195] Run: crio config
	I1205 20:52:09.105852   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:09.105879   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:09.105901   47365 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:09.105926   47365 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-463614 NodeName:default-k8s-diff-port-463614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:09.106114   47365 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-463614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:09.106218   47365 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-463614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1205 20:52:09.106295   47365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:09.116476   47365 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:09.116569   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:09.125304   47365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1205 20:52:09.141963   47365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:09.158882   47365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1205 20:52:09.177829   47365 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:09.181803   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:09.194791   47365 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614 for IP: 192.168.39.27
	I1205 20:52:09.194824   47365 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:09.194968   47365 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:09.195028   47365 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:09.195135   47365 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.key
	I1205 20:52:09.195225   47365 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key.310d49ea
	I1205 20:52:09.195287   47365 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key
	I1205 20:52:09.195457   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:09.195502   47365 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:09.195519   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:09.195561   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:09.195594   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:09.195625   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:09.195698   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:09.196495   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:09.221945   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:09.249557   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:09.279843   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:09.309602   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:09.338163   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:09.365034   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:09.394774   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:09.420786   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:09.445787   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:09.474838   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:09.499751   47365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:09.523805   47365 ssh_runner.go:195] Run: openssl version
	I1205 20:52:09.530143   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:09.545184   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550681   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550751   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.558670   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:09.573789   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:09.585134   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591055   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591136   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.597286   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:09.608901   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:09.620949   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626190   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626267   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.632394   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:09.645362   47365 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:09.650768   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:09.657084   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:09.663183   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:09.669093   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:09.675365   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:09.681992   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:52:09.688849   47365 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:09.688963   47365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:09.689035   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:09.730999   47365 cri.go:89] found id: ""
	I1205 20:52:09.731061   47365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:09.741609   47365 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:09.741640   47365 kubeadm.go:636] restartCluster start
	I1205 20:52:09.741700   47365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:09.751658   47365 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.752671   47365 kubeconfig.go:92] found "default-k8s-diff-port-463614" server: "https://192.168.39.27:8444"
	I1205 20:52:09.755361   47365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:09.765922   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.766006   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.781956   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.781983   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.782033   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.795265   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.295986   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.296088   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.312309   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.795832   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.795959   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.808880   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.857552   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:10.857968   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:10.858002   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:10.857911   48036 retry.go:31] will retry after 1.882319488s: waiting for machine to come up
	I1205 20:52:12.741608   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:12.742051   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:12.742081   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:12.742006   48036 retry.go:31] will retry after 2.598691975s: waiting for machine to come up
	I1205 20:52:15.343818   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:15.344360   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:15.344385   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:15.344306   48036 retry.go:31] will retry after 3.313897625s: waiting for machine to come up
	I1205 20:52:11.002661   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.002740   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.014931   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.502548   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.502621   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.516090   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.975668   46866 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:11.975724   46866 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:11.975739   46866 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:11.975820   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:12.032265   46866 cri.go:89] found id: ""
	I1205 20:52:12.032364   46866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:12.050705   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:12.060629   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:12.060726   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.073988   46866 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.074015   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:12.209842   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.318235   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108353469s)
	I1205 20:52:13.318280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.518224   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.606064   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.695764   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:13.695849   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:13.718394   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.237554   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.737066   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:15.236911   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
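	Because the stale-config check above failed, the cluster is reconfigured by re-running individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full init, after which the bootstrapper waits for the apiserver process. A sketch of that phase sequence, using plain exec instead of the "sudo env PATH=..." wrapper seen in the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		const (
			binDir = "/var/lib/minikube/binaries/v1.29.0-rc.1"
			cfg    = "/var/tmp/minikube/kubeadm.yaml"
		)
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, args := range phases {
			// Each phase regenerates one piece of the control plane in place.
			cmd := exec.Command("sudo", append([]string{binDir + "/kubeadm"}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
			}
		}
		log.Println("control-plane static pods regenerated; now waiting for the apiserver")
	}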
	I1205 20:52:11.295662   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.295754   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.308889   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.796322   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.796432   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.812351   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.295433   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.295527   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.308482   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.795889   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.795961   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.812458   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.296017   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.296114   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.312758   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.796111   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.796256   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.812247   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.295726   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.295808   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.308712   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.796358   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.796439   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.813173   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.295541   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.295632   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.312665   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.796231   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.796378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.816767   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.292395   46700 retry.go:31] will retry after 12.309806949s: kubelet not initialised
	I1205 20:52:18.659431   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:18.659915   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:18.659944   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:18.659867   48036 retry.go:31] will retry after 3.672641091s: waiting for machine to come up
	I1205 20:52:15.737064   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.237656   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.263010   46866 api_server.go:72] duration metric: took 2.567245952s to wait for apiserver process to appear ...
	I1205 20:52:16.263039   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:16.263057   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.286115   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.286153   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.286173   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.334683   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.334710   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.835110   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.840833   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:19.840866   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.335444   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.355923   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:20.355956   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.835568   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.840974   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:52:20.849239   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:52:20.849274   46866 api_server.go:131] duration metric: took 4.586226618s to wait for apiserver health ...
	I1205 20:52:20.849284   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:20.849323   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:20.850829   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
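In the sequence above, the /healthz probe first returns 403 because the unauthenticated request runs as system:anonymous, then 500 while poststarthooks such as rbac/bootstrap-roles are still pending, and finally 200, at which point minikube reads the control-plane version and moves on to CNI configuration. A rough standalone sketch of that probe loop (standard library only; the URL is copied from the log, and InsecureSkipVerify is used only because this sketch has no access to the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. 403 (anonymous user) and 500 (poststarthooks
	// still failing) are treated as "not healthy yet", as in the log above.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.61.162:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}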
	I1205 20:52:16.295650   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.295729   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.312742   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:16.796283   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.796364   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.812822   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.295879   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.295953   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.312254   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.795437   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.795519   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.808598   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.296187   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.296266   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.312808   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.796368   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.796480   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.812986   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.295511   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:19.295576   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:19.308830   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.766569   47365 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:19.766653   47365 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:19.766673   47365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:19.766748   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:19.820510   47365 cri.go:89] found id: ""
	I1205 20:52:19.820590   47365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:19.842229   47365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:19.853234   47365 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:19.853293   47365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866181   47365 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866220   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:20.022098   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.165439   47365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143295704s)
	I1205 20:52:21.165472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
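Because none of the /etc/kubernetes/*.conf kubeconfig files were present, minikube re-runs the individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig and kubelet-start above, with the control-plane and etcd phases following a little further down in this log. An equivalent sequence, sketched with os/exec (paths and Kubernetes version copied from the log, everything else illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same phase order the log shows after the stale-config check failed.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := append([]string{
				"env", "PATH=/var/lib/minikube/binaries/v1.28.4:" + os.Getenv("PATH"),
				"kubeadm", "init", "phase"}, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Printf("kubeadm phase %v failed: %v\n", phase, err)
				return
			}
		}
	}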
	I1205 20:52:22.333575   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334146   46374 main.go:141] libmachine: (embed-certs-331495) Found IP for machine: 192.168.72.180
	I1205 20:52:22.334189   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has current primary IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334205   46374 main.go:141] libmachine: (embed-certs-331495) Reserving static IP address...
	I1205 20:52:22.334654   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.334686   46374 main.go:141] libmachine: (embed-certs-331495) DBG | skip adding static IP to network mk-embed-certs-331495 - found existing host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"}
	I1205 20:52:22.334699   46374 main.go:141] libmachine: (embed-certs-331495) Reserved static IP address: 192.168.72.180
	I1205 20:52:22.334717   46374 main.go:141] libmachine: (embed-certs-331495) Waiting for SSH to be available...
	I1205 20:52:22.334727   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Getting to WaitForSSH function...
	I1205 20:52:22.337411   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337832   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.337863   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337976   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH client type: external
	I1205 20:52:22.338005   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa (-rw-------)
	I1205 20:52:22.338038   46374 main.go:141] libmachine: (embed-certs-331495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:22.338057   46374 main.go:141] libmachine: (embed-certs-331495) DBG | About to run SSH command:
	I1205 20:52:22.338071   46374 main.go:141] libmachine: (embed-certs-331495) DBG | exit 0
	I1205 20:52:22.430984   46374 main.go:141] libmachine: (embed-certs-331495) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:22.431374   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetConfigRaw
	I1205 20:52:22.432120   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.435317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.435737   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.435772   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.436044   46374 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:52:22.436283   46374 machine.go:88] provisioning docker machine ...
	I1205 20:52:22.436304   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:22.436519   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436687   46374 buildroot.go:166] provisioning hostname "embed-certs-331495"
	I1205 20:52:22.436707   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436882   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.439595   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.439966   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.439998   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.440179   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.440392   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440558   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440718   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.440891   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.441216   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.441235   46374 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-331495 && echo "embed-certs-331495" | sudo tee /etc/hostname
	I1205 20:52:22.584600   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-331495
	
	I1205 20:52:22.584662   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.587640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588053   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.588083   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588255   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.588469   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.588985   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.589340   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.589369   46374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-331495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-331495/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-331495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:22.722352   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:22.722390   46374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:22.722437   46374 buildroot.go:174] setting up certificates
	I1205 20:52:22.722459   46374 provision.go:83] configureAuth start
	I1205 20:52:22.722475   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.722776   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.725826   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726254   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.726313   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726616   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.729267   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729606   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.729640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729798   46374 provision.go:138] copyHostCerts
	I1205 20:52:22.729843   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:22.729853   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:22.729907   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:22.729986   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:22.729994   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:22.730019   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:22.730090   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:22.730100   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:22.730128   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:22.730188   46374 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.embed-certs-331495 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-331495]
	I1205 20:52:22.795361   46374 provision.go:172] copyRemoteCerts
	I1205 20:52:22.795435   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:22.795464   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.798629   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799006   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.799052   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799222   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.799448   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.799617   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.799774   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:22.892255   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:52:22.929940   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:52:22.966087   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:22.998887   46374 provision.go:86] duration metric: configureAuth took 276.409362ms
	I1205 20:52:22.998937   46374 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:22.999160   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:22.999253   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.002604   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.002992   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.003033   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.003265   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.003516   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003723   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003916   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.004090   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.004540   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.004568   46374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:23.371418   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:23.371450   46374 machine.go:91] provisioned docker machine in 935.149228ms
	I1205 20:52:23.371464   46374 start.go:300] post-start starting for "embed-certs-331495" (driver="kvm2")
	I1205 20:52:23.371477   46374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:23.371500   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.371872   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:23.371911   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.375440   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.375960   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.375991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.376130   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.376328   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.376512   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.376693   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.472304   46374 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:23.477044   46374 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:23.477070   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:23.477177   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:23.477287   46374 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:23.477425   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:23.493987   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:23.519048   46374 start.go:303] post-start completed in 147.566985ms
	I1205 20:52:23.519082   46374 fix.go:56] fixHost completed within 21.27172194s
	I1205 20:52:23.519107   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.522260   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522700   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.522735   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522967   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.523238   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523456   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.523893   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.524220   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.524239   46374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:23.648717   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809543.591713401
	
	I1205 20:52:23.648743   46374 fix.go:206] guest clock: 1701809543.591713401
	I1205 20:52:23.648755   46374 fix.go:219] Guest: 2023-12-05 20:52:23.591713401 +0000 UTC Remote: 2023-12-05 20:52:23.519087629 +0000 UTC m=+358.020977056 (delta=72.625772ms)
	I1205 20:52:23.648800   46374 fix.go:190] guest clock delta is within tolerance: 72.625772ms
	I1205 20:52:23.648808   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 21.401495157s
	I1205 20:52:23.648838   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.649149   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:23.652098   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652534   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.652577   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652773   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653350   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653552   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653655   46374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:23.653709   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.653948   46374 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:23.653989   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.657266   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657547   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657637   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657669   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657946   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657957   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.657970   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.658236   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.658250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658438   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658532   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658756   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.658785   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658933   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.777965   46374 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:23.784199   46374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:23.948621   46374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:23.957081   46374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:23.957163   46374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:23.978991   46374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:23.979023   46374 start.go:475] detecting cgroup driver to use...
	I1205 20:52:23.979124   46374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:23.997195   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:24.015420   46374 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:24.015494   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:24.031407   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:24.047587   46374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:24.200996   46374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:24.332015   46374 docker.go:219] disabling docker service ...
	I1205 20:52:24.332095   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:24.350586   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:24.367457   46374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:24.545467   46374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:24.733692   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:24.748391   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:24.768555   46374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:24.768644   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.780668   46374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:24.780740   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.792671   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.806500   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.818442   46374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:24.829822   46374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:24.842070   46374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:24.842138   46374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:24.857370   46374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:24.867993   46374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:25.024629   46374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:25.231556   46374 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:25.231630   46374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:25.237863   46374 start.go:543] Will wait 60s for crictl version
	I1205 20:52:25.237929   46374 ssh_runner.go:195] Run: which crictl
	I1205 20:52:25.242501   46374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:25.289507   46374 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:25.289591   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.340432   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.398354   46374 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
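The crio.go lines above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed so that cri-o uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager (with conmon in the "pod" cgroup), then restart crio and verify it with crictl. A Go approximation of those same edits (simplified regexes standing in for the shell pipeline; not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		conf := string(data)
		// Pin the pause image, as the first sed in the log does.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Drop any existing conmon_cgroup line, then re-add it after
		// cgroup_manager, mirroring the delete-then-append sed pair.
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			fmt.Println(err)
		}
	}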
	I1205 20:52:25.399701   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:25.402614   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.402997   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:25.403029   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.403259   46374 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:25.407873   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:25.420725   46374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:25.420801   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:25.468651   46374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:25.468726   46374 ssh_runner.go:195] Run: which lz4
	I1205 20:52:25.473976   46374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:25.478835   46374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:25.478871   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:20.852220   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:20.867614   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:20.892008   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:20.912985   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:20.913027   46866 system_pods.go:61] "coredns-76f75df574-8d24t" [10265d3b-ddf0-4559-8194-d42563df88a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:20.913038   46866 system_pods.go:61] "etcd-no-preload-143651" [a6b62f23-a944-41ec-b465-6027fcf1f413] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:20.913051   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [5a6b5874-6c6b-4ed6-aa68-8e7fc35a486e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:20.913061   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [42b01d8c-2d8f-467e-8183-eef2e6f73b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:20.913074   46866 system_pods.go:61] "kube-proxy-mltvl" [9adea5d0-e824-40ff-b5b4-16f84fd439ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:20.913085   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [17474fca-8390-48db-bebe-47c1e2cf7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:20.913107   46866 system_pods.go:61] "metrics-server-57f55c9bc5-mhxpn" [3eb25a58-bea3-4266-9bf8-8f186ee65e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:20.913120   46866 system_pods.go:61] "storage-provisioner" [cfe9d24c-a534-4778-980b-99f7addcf0b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:20.913132   46866 system_pods.go:74] duration metric: took 21.101691ms to wait for pod list to return data ...
	I1205 20:52:20.913143   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:20.917108   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:20.917140   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:20.917156   46866 node_conditions.go:105] duration metric: took 4.003994ms to run NodePressure ...
	I1205 20:52:20.917180   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.315507   46866 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321271   46866 kubeadm.go:787] kubelet initialised
	I1205 20:52:21.321301   46866 kubeadm.go:788] duration metric: took 5.763416ms waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321310   46866 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:21.327760   46866 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:23.354192   46866 pod_ready.go:102] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:25.353274   46866 pod_ready.go:92] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:25.353356   46866 pod_ready.go:81] duration metric: took 4.02555842s waiting for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:25.353372   46866 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
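pod_ready.go polls each system-critical pod's Ready condition until it flips to True or the 4m0s budget runs out, which is why the coredns pod above reports "Ready":"False" twice before succeeding. A comparable check can be sketched with kubectl and a jsonpath filter (profile, namespace and pod names copied from the log; assuming the kubectl context matches the minikube profile name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady polls a pod's Ready condition with kubectl, loosely
	// mirroring minikube's pod_ready.go loop above.
	func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"-n", namespace, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
	}

	func main() {
		err := waitPodReady("no-preload-143651", "kube-system", "etcd-no-preload-143651", 4*time.Minute)
		fmt.Println(err)
	}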
	I1205 20:52:21.402472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.498902   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.585971   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:21.586073   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:21.605993   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.120378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.620326   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.119466   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.619549   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.120228   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.143130   47365 api_server.go:72] duration metric: took 2.557157382s to wait for apiserver process to appear ...
	I1205 20:52:24.143163   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:24.143182   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:27.608165   46700 retry.go:31] will retry after 7.717398196s: kubelet not initialised
	I1205 20:52:28.335417   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:28.335446   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:28.335457   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.429478   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.429507   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:28.929996   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.936475   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.936525   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.430308   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.437787   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:29.437838   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.930326   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.942625   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:52:29.953842   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:29.953875   47365 api_server.go:131] duration metric: took 5.810704359s to wait for apiserver health ...
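The healthz wait above polls /healthz until it returns 200: first a 403 while anonymous access is still forbidden, then 500s while post-start hooks such as rbac/bootstrap-roles are still failing, and finally "ok". A minimal sketch of that polling loop, assuming the endpoint URL from the log and an illustrative 4-minute budget; this is not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bootstrap, so certificate
		// verification is skipped here purely for the health probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap, 500 while post-start hooks are failing.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.27:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}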
	I1205 20:52:29.953889   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:29.953904   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:29.955505   47365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:27.326223   46374 crio.go:444] Took 1.852284 seconds to copy over tarball
	I1205 20:52:27.326333   46374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:27.374784   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:29.378733   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:30.375181   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:30.375266   46866 pod_ready.go:81] duration metric: took 5.021883955s waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.375316   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:29.956914   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:29.981391   47365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
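For reference, a hypothetical minimal bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist above; the field values (bridge name, subnet, plugin list) are illustrative assumptions, not the exact 457-byte file minikube generates:

package main

import "os"

// bridgeConflist is a hypothetical minimal bridge CNI config; the actual file
// minikube writes may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing the file requires root on the node; 0644 matches typical CNI conf permissions.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}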
	I1205 20:52:30.016634   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:30.030957   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:30.031030   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:30.031047   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:30.031069   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:30.031088   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:30.031117   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:30.031135   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:30.031148   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:30.031165   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:30.031177   47365 system_pods.go:74] duration metric: took 14.513879ms to wait for pod list to return data ...
	I1205 20:52:30.031190   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:30.035458   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:30.035493   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:30.035506   47365 node_conditions.go:105] duration metric: took 4.295594ms to run NodePressure ...
	I1205 20:52:30.035525   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:30.302125   47365 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307852   47365 kubeadm.go:787] kubelet initialised
	I1205 20:52:30.307875   47365 kubeadm.go:788] duration metric: took 5.724991ms waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307883   47365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:30.316621   47365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.323682   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323716   47365 pod_ready.go:81] duration metric: took 7.060042ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.323728   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323736   47365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.338909   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338945   47365 pod_ready.go:81] duration metric: took 15.198541ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.338967   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338977   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.349461   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349491   47365 pod_ready.go:81] duration metric: took 10.504515ms waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.349505   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349513   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.422520   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422553   47365 pod_ready.go:81] duration metric: took 73.030993ms waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.422569   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422588   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.212527   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212553   47365 pod_ready.go:81] duration metric: took 789.956497ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.212564   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212575   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.727110   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727140   47365 pod_ready.go:81] duration metric: took 514.553589ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.727154   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727162   47365 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.168658   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168695   47365 pod_ready.go:81] duration metric: took 441.52358ms waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:32.168711   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168720   47365 pod_ready.go:38] duration metric: took 1.860826751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
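The pod_ready waits above inspect the PodReady condition of each system-critical pod (and skip pods whose node is not yet Ready). A minimal client-go sketch of that readiness check, assuming a kubeconfig at /var/lib/minikube/kubeconfig and using one of the pod names from the log as an example; minikube itself reaches the cluster differently:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-6pmzf", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}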
	I1205 20:52:32.168747   47365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:52:32.182053   47365 ops.go:34] apiserver oom_adj: -16
	I1205 20:52:32.182075   47365 kubeadm.go:640] restartCluster took 22.440428452s
	I1205 20:52:32.182083   47365 kubeadm.go:406] StartCluster complete in 22.493245354s
	I1205 20:52:32.182130   47365 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.182208   47365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:52:32.184035   47365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.290773   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:52:32.290931   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:32.290921   47365 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:52:32.291055   47365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291079   47365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291088   47365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291099   47365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-463614"
	I1205 20:52:32.291123   47365 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291133   47365 addons.go:240] addon metrics-server should already be in state true
	I1205 20:52:32.291177   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291093   47365 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291220   47365 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:52:32.291298   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291586   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291607   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291633   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291635   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291713   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291739   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.311298   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I1205 20:52:32.311514   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I1205 20:52:32.311541   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1205 20:52:32.311733   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.311932   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312026   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312291   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312325   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312434   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312456   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312487   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312501   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312688   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312763   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312833   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.313276   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313300   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.313359   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313390   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.316473   47365 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.316493   47365 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:52:32.316520   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.317093   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.317125   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.328598   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1205 20:52:32.329097   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.329225   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I1205 20:52:32.329589   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.329608   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.329674   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.330230   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.330248   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.330298   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330484   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330553   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330719   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330908   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I1205 20:52:32.331201   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.331935   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.331953   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.332351   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.332472   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.332653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.512055   47365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:52:32.333098   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.511993   47365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:52:32.536814   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:52:32.512201   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.536942   47365 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.536958   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:52:32.536985   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.536843   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:52:32.537043   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.541412   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541780   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541924   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.541958   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542190   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542369   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.542394   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542434   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.542641   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.542748   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542905   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.542939   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.543088   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.543246   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.554014   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1205 20:52:32.554513   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.554975   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.555007   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.555387   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.555634   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.557606   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.557895   47365 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.557911   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:52:32.557936   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.561075   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.561553   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.561942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.562135   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.562338   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.673513   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.682442   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:52:32.682472   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:52:32.706007   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.726379   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:52:32.726413   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:52:32.779247   47365 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:52:32.780175   47365 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-463614" context rescaled to 1 replicas
	I1205 20:52:32.780220   47365 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:52:32.787518   47365 out.go:177] * Verifying Kubernetes components...
	I1205 20:52:32.790046   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:32.796219   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:32.796248   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:52:32.854438   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:34.594203   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920648219s)
	I1205 20:52:34.594267   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594294   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888240954s)
	I1205 20:52:34.594331   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594343   47365 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.80425984s)
	I1205 20:52:34.594373   47365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:34.594350   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594710   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594729   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.594755   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594772   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594783   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594801   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594754   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594860   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.595134   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595195   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595229   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595238   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.595356   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595375   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.610358   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.610390   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.610651   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.610677   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689242   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.834763203s)
	I1205 20:52:34.689294   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689309   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.689648   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.689698   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.689717   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689740   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.690020   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.690025   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.690035   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.690046   47365 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-463614"
	I1205 20:52:34.692072   47365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
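The addon manifests are applied by shelling out to the bundled kubectl with KUBECONFIG pointing at the in-VM kubeconfig, as in the commands logged above. A stand-alone sketch of that pattern, with the binary and manifest paths copied from the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: sudo KUBECONFIG=... kubectl apply -f <addon manifest>.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}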
	I1205 20:52:30.639619   46374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.313251826s)
	I1205 20:52:30.641314   46374 crio.go:451] Took 3.315054 seconds to extract the tarball
	I1205 20:52:30.641328   46374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:30.687076   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:30.745580   46374 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:30.745603   46374 cache_images.go:84] Images are preloaded, skipping loading
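The preload check above runs `crictl images --output json` and decides the images are already present. A sketch of parsing that output; the JSON field names modeled here are assumptions for illustration, not a documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models only the fields used below; the shape is an assumption.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// listImageTags shells out to crictl and returns every repo tag it reports.
func listImageTags() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := listImageTags()
	if err != nil {
		panic(err)
	}
	fmt.Println(tags)
}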
	I1205 20:52:30.745681   46374 ssh_runner.go:195] Run: crio config
	I1205 20:52:30.807631   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:30.807656   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:30.807674   46374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:30.807692   46374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-331495 NodeName:embed-certs-331495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:30.807828   46374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-331495"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:30.807897   46374 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-331495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:30.807958   46374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:30.820571   46374 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:30.820679   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:30.831881   46374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:52:30.852058   46374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:30.870516   46374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 20:52:30.888000   46374 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:30.892529   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:30.904910   46374 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495 for IP: 192.168.72.180
	I1205 20:52:30.904950   46374 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:30.905143   46374 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:30.905197   46374 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:30.905280   46374 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/client.key
	I1205 20:52:30.905336   46374 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key.379caec1
	I1205 20:52:30.905368   46374 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key
	I1205 20:52:30.905463   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:30.905489   46374 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:30.905499   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:30.905525   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:30.905550   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:30.905572   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:30.905609   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:30.906129   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:30.930322   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:30.953120   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:30.976792   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:31.000462   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:31.025329   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:31.050451   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:31.075644   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:31.101693   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:31.125712   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:31.149721   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:31.173466   46374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:31.191836   46374 ssh_runner.go:195] Run: openssl version
	I1205 20:52:31.197909   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:31.212206   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219081   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219155   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.225423   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:31.239490   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:31.251505   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256613   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256678   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.262730   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:31.274879   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:31.286201   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291593   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291658   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.298904   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:31.310560   46374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:31.315670   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:31.322461   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:31.328590   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:31.334580   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:31.341827   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:31.348456   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
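	(The block above is minikube hashing each CA bundle into the node trust store and then confirming the kubeadm certificates stay valid for at least another day. A rough shell sketch of the same checks, using illustrative file names rather than the exact paths from this run:

	    # OpenSSL looks up CAs in /etc/ssl/certs by subject-hash symlinks named <hash>.N
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	    # -checkend 86400 exits 0 only if the certificate is still valid 24h from now
	    openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/etcd/server.crt \
	      && echo "valid for >= 24h" || echo "expires within 24h"
	)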
	I1205 20:52:31.354835   46374 kubeadm.go:404] StartCluster: {Name:embed-certs-331495 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:31.354945   46374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:31.355024   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:31.396272   46374 cri.go:89] found id: ""
	I1205 20:52:31.396346   46374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:31.406603   46374 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:31.406629   46374 kubeadm.go:636] restartCluster start
	I1205 20:52:31.406683   46374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:31.417671   46374 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.419068   46374 kubeconfig.go:92] found "embed-certs-331495" server: "https://192.168.72.180:8443"
	I1205 20:52:31.421304   46374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:31.432188   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.432260   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.445105   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.445132   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.445182   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.457857   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.958205   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.958322   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.972477   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.458645   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.458732   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.475471   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.958778   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.958872   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.973340   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.458838   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.458924   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.475090   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.958680   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.958776   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.974789   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.458297   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.458371   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.471437   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.958961   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.959030   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.972007   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:35.458648   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.458729   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.471573   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.362684   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.362706   46866 pod_ready.go:81] duration metric: took 1.98737949s waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.362715   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368694   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.368717   46866 pod_ready.go:81] duration metric: took 5.993796ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368726   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375418   46866 pod_ready.go:92] pod "kube-proxy-mltvl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.375442   46866 pod_ready.go:81] duration metric: took 6.709035ms waiting for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375452   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383393   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.383418   46866 pod_ready.go:81] duration metric: took 7.957397ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383430   46866 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:34.497914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:34.693693   47365 addons.go:502] enable addons completed in 2.40279745s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 20:52:35.331317   46700 retry.go:31] will retry after 13.122920853s: kubelet not initialised
	I1205 20:52:35.958930   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.959020   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.971607   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.458135   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.458202   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.475097   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.958621   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.958703   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.974599   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.458670   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.458790   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.472296   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.958470   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.958561   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.971241   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.458862   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.458957   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.471475   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.958727   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.958807   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.971366   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.458991   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.459084   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.471352   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.958955   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.959052   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.972803   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:40.458181   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.458251   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.470708   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.499335   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:38.996779   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:36.611450   47365 node_ready.go:58] node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:39.111234   47365 node_ready.go:49] node "default-k8s-diff-port-463614" has status "Ready":"True"
	I1205 20:52:39.111266   47365 node_ready.go:38] duration metric: took 4.51686489s waiting for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:39.111278   47365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:39.117815   47365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124431   47365 pod_ready.go:92] pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.124455   47365 pod_ready.go:81] duration metric: took 6.615213ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124464   47365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131301   47365 pod_ready.go:92] pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.131340   47365 pod_ready.go:81] duration metric: took 6.85604ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131352   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:41.155265   47365 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:40.958830   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.958921   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.970510   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:41.432806   46374 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:41.432840   46374 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:41.432854   46374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:41.432909   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:41.476486   46374 cri.go:89] found id: ""
	I1205 20:52:41.476550   46374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:41.493676   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:41.503594   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:41.503681   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512522   46374 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512550   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:41.645081   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.368430   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.586289   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.657555   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.753020   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:42.753103   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:42.767926   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.286111   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.786148   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.285601   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.785638   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.285508   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.326812   46374 api_server.go:72] duration metric: took 2.573794156s to wait for apiserver process to appear ...
	I1205 20:52:45.326839   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:45.326857   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327337   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:45.327367   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327771   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:40.998702   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:43.508882   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:42.152898   47365 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:42.152926   47365 pod_ready.go:81] duration metric: took 3.021552509s waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:42.152939   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320531   47365 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.320632   47365 pod_ready.go:81] duration metric: took 1.167680941s waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320660   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521255   47365 pod_ready.go:92] pod "kube-proxy-g4zct" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.521286   47365 pod_ready.go:81] duration metric: took 200.606753ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521300   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911946   47365 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.911972   47365 pod_ready.go:81] duration metric: took 390.664131ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911983   47365 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:46.220630   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.459426   46700 kubeadm.go:787] kubelet initialised
	I1205 20:52:48.459452   46700 kubeadm.go:788] duration metric: took 53.977281861s waiting for restarted kubelet to initialise ...
	I1205 20:52:48.459460   46700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:48.465332   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471155   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.471184   46700 pod_ready.go:81] duration metric: took 5.815983ms waiting for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471195   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476833   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.476861   46700 pod_ready.go:81] duration metric: took 5.658311ms waiting for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476876   46700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481189   46700 pod_ready.go:92] pod "etcd-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.481217   46700 pod_ready.go:81] duration metric: took 4.332284ms waiting for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481230   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485852   46700 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.485869   46700 pod_ready.go:81] duration metric: took 4.630813ms waiting for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485879   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:45.828213   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.185115   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.185143   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.185156   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.228977   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.229017   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.328278   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.336930   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.336971   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:49.828530   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.835188   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.835215   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:50.328834   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.337852   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:50.337885   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:45.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:47.998466   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.497317   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.828313   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.835050   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:52:50.844093   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:50.844124   46374 api_server.go:131] duration metric: took 5.517278039s to wait for apiserver health ...
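	(The healthz probing above can be reproduced by hand against the endpoint this log reports; a rough sketch, unauthenticated, so -k and the initial 403 are expected:

	    # expect 403 for anonymous requests until RBAC bootstrap completes,
	    # 500 with a per-check breakdown while post-start hooks still fail,
	    # and finally a bare "ok" once the apiserver is healthy
	    curl -k https://192.168.72.180:8443/healthz
	    # ask for the per-check breakdown even on success
	    curl -k "https://192.168.72.180:8443/healthz?verbose"
	)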
	I1205 20:52:50.844134   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:50.844141   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:50.846047   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:48.220942   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.720446   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.858954   46700 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.858980   46700 pod_ready.go:81] duration metric: took 373.093905ms waiting for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.858989   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260468   46700 pod_ready.go:92] pod "kube-proxy-r5n6g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.260493   46700 pod_ready.go:81] duration metric: took 401.497792ms waiting for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260501   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658952   46700 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.658977   46700 pod_ready.go:81] duration metric: took 398.469864ms waiting for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658986   46700 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:51.966947   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.848285   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:50.865469   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
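	(The 457-byte file written above is the bridge CNI config minikube generates; the log does not show its contents, but a typical bridge-plugin conflist has roughly this shape — every value below is illustrative, not the exact file from this run:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "1.0.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)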
	I1205 20:52:50.918755   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:50.951671   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:50.951705   46374 system_pods.go:61] "coredns-5dd5756b68-7xr6w" [8300dbf8-413a-4171-9e56-53f0f2d03fd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:50.951712   46374 system_pods.go:61] "etcd-embed-certs-331495" [b2802bcb-262e-4d2a-9589-b1b3885de515] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:50.951722   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [6f9a28a7-8827-4071-8c68-f2671e7a8017] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:50.951738   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [24e85887-7f58-4a5c-b0d4-4eebd6076a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:50.951744   46374 system_pods.go:61] "kube-proxy-76qq2" [ffd744ec-9522-443c-b609-b11e24ab9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:50.951750   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [aaa502dc-a7cf-4f76-b79f-aa8be1ae48f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:50.951756   46374 system_pods.go:61] "metrics-server-57f55c9bc5-bcg28" [e60503c2-732d-44a3-b5da-fbf7a0cfd981] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:50.951761   46374 system_pods.go:61] "storage-provisioner" [be1aa61b-82e9-4382-ab1c-89e30b801fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:50.951767   46374 system_pods.go:74] duration metric: took 32.973877ms to wait for pod list to return data ...
	I1205 20:52:50.951773   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:50.971413   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:50.971440   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:50.971449   46374 node_conditions.go:105] duration metric: took 19.672668ms to run NodePressure ...
	I1205 20:52:50.971465   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:51.378211   46374 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383462   46374 kubeadm.go:787] kubelet initialised
	I1205 20:52:51.383487   46374 kubeadm.go:788] duration metric: took 5.246601ms waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383495   46374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:51.393558   46374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:53.414801   46374 pod_ready.go:102] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.426681   46374 pod_ready.go:92] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:55.426710   46374 pod_ready.go:81] duration metric: took 4.033124274s waiting for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:55.426725   46374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:52.498509   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.997539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:53.221825   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.723682   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.468896   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:56.966471   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.468158   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.469797   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.497582   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.500937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.727756   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.727968   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.466541   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469387   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469996   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.968435   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.969033   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.969065   46374 pod_ready.go:81] duration metric: took 9.542324599s waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.969073   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975019   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.975041   46374 pod_ready.go:81] duration metric: took 5.961268ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975049   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980743   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.980771   46374 pod_ready.go:81] duration metric: took 5.713974ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980779   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985565   46374 pod_ready.go:92] pod "kube-proxy-76qq2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.985596   46374 pod_ready.go:81] duration metric: took 4.805427ms waiting for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985610   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992009   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.992035   46374 pod_ready.go:81] duration metric: took 6.416324ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992047   46374 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
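	(The pod_ready polling that follows keeps re-checking the metrics-server pod. A rough way to run the same check by hand, assuming the kubectl context carries the profile name embed-certs-331495 and the addon uses the usual k8s-app=metrics-server label:

	    # show why the pod is not Ready (image pull, probe failures, etc.)
	    kubectl --context embed-certs-331495 -n kube-system \
	      describe pod -l k8s-app=metrics-server
	    # block until the pod reports Ready, or time out like the test harness does
	    kubectl --context embed-certs-331495 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m
	)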
	I1205 20:53:01.996877   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.997311   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:02.221319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.720314   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.966830   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.465943   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:07.272848   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.272897   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:05.997810   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.497408   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.722608   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.222226   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.965894   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.967253   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.466458   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.773608   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.773778   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.997547   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:12.999476   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.496736   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.721128   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.721371   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.221780   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.466602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.965160   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.272951   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.772527   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.497284   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.498006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.223073   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.724402   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.966424   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.466866   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.772789   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.273369   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:21.997270   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.496150   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:23.221999   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.223587   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.967755   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.465568   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.772596   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:30.273464   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:26.496470   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.003099   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.721654   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.724134   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.466332   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.966465   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.773521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:35.272236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.497006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.000663   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.221725   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.719806   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.466035   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.966501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:37.773436   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.274255   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.496949   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.996265   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.721339   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.723854   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.221087   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:39.465585   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.465785   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.467239   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:42.773263   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:44.773717   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.998588   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.496904   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.497783   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.222148   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.722122   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.966317   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.966572   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.272412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:49.273057   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.997444   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.496708   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.722350   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.219843   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.467523   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.967357   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:51.773424   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:53.775574   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.499839   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.997448   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.222442   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.719693   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:55.466751   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:57.966602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.271805   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.272923   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.273306   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.998244   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:59.498440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.720684   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.729688   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.220861   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.466162   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.966846   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.773903   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.271747   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.995748   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:04.002522   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:03.723212   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.224289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.465907   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.466264   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.272960   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.274281   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.497442   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.997440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.721146   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:10.724743   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.966368   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.966796   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.772305   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.772470   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.496229   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.497913   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.221912   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.722076   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:14.467708   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:16.965932   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.773481   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:17.774552   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.273733   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.998027   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.496453   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.497053   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.223289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.722234   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.966869   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:21.465921   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:23.466328   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.272550   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.497084   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:24.498177   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.727882   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.221485   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.966388   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.466553   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.772616   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.773188   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:26.997209   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.997776   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.721711   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.722528   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:30.964854   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.966383   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.272612   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.275600   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:31.498601   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:33.997450   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.220641   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.222232   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.476491   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.968512   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.772248   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.272991   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.997574   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.999016   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.501116   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.723179   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.220182   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.469607   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.968860   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.274044   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.502208   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:44.997516   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.720811   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.721757   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.725689   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.466766   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.966704   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.773511   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.273161   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.274031   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.497342   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:49.502501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.223549   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.719890   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.465849   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.466157   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.772748   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:55.272781   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:51.997636   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.499333   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.720512   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.721826   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.466519   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.466580   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.274370   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.774179   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.997654   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.497915   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.221713   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.723015   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:58.965289   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:00.966027   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.967557   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.273349   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.773101   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.996491   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:03.996649   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.723123   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.220986   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.224736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.466592   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.966611   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.773180   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.774008   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.997589   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.998076   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.001226   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.720517   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.221172   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.466096   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.467200   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.272981   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.773210   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.496043   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.497518   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.725751   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.219939   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.966795   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:17.466501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.272578   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.273500   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.997861   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.499434   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.221058   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.720978   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.466641   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.965389   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.772109   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.274633   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.997800   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:24.497501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.220292   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.722738   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.966366   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.966799   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.465341   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.773108   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:27.774236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.274971   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:26.997610   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.997753   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.220185   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.220399   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.466026   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.966220   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.772859   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:35.272898   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:31.497899   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:33.500772   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.220696   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.221098   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.222701   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.966787   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.465676   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.775190   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.272006   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.000539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.497044   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.720509   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.730400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:39.468063   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:41.966415   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:42.276412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.772916   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.996937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.496928   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.220575   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.724283   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.465646   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.467000   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.773090   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.273675   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.997477   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:47.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.998126   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.220758   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:50.720911   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.966711   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.468554   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.773277   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:52.501489   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:54.996998   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.221047   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.221493   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.965841   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.965891   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.465977   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.272446   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.772269   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.997565   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.496443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:57.722571   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.724736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.466069   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.966747   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.772715   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.271368   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.274084   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:01.498102   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.498428   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.220645   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.720012   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.966850   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.467719   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.772997   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.273279   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.998642   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:08.001018   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.496939   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:06.721938   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.219709   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:11.220579   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.968249   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.465039   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.773538   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.272696   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.500855   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.996837   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:13.725252   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.725522   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.465989   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:16.966908   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.273749   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.772650   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.496107   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.496914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:18.224365   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:20.720429   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.465513   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.967092   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.775353   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:24.277586   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.498047   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.999733   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.219319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:25.222340   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.967374   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.465973   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.468481   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.772514   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.774642   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.496794   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.498446   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:27.723499   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.222748   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.965650   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.967183   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.777450   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:33.276381   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.999443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.384081   46866 pod_ready.go:81] duration metric: took 4m0.000635015s waiting for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:32.384115   46866 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:32.384132   46866 pod_ready.go:38] duration metric: took 4m11.062812404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:32.384156   46866 kubeadm.go:640] restartCluster took 4m30.437260197s
	W1205 20:56:32.384250   46866 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:32.384280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:32.721610   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.220186   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.467452   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.966451   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.773516   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.773737   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.273185   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.221794   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:39.722400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.466005   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.467531   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.773790   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:45.272396   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:41.722481   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.734080   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.912982   47365 pod_ready.go:81] duration metric: took 4m0.000982583s waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:43.913024   47365 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:43.913038   47365 pod_ready.go:38] duration metric: took 4m4.801748698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:43.913063   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:56:43.913101   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:43.913175   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:43.965196   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:43.965220   47365 cri.go:89] found id: ""
	I1205 20:56:43.965228   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:43.965272   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:43.970257   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:43.970353   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:44.026974   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.027005   47365 cri.go:89] found id: ""
	I1205 20:56:44.027015   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:44.027099   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.032107   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:44.032212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:44.075721   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:44.075758   47365 cri.go:89] found id: ""
	I1205 20:56:44.075766   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:44.075823   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.082125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:44.082212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:44.125099   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:44.125122   47365 cri.go:89] found id: ""
	I1205 20:56:44.125129   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:44.125171   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.129477   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:44.129538   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:44.180281   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.180305   47365 cri.go:89] found id: ""
	I1205 20:56:44.180313   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:44.180357   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.185094   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:44.185173   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:44.228693   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.228719   47365 cri.go:89] found id: ""
	I1205 20:56:44.228730   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:44.228786   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.233574   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:44.233687   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:44.279286   47365 cri.go:89] found id: ""
	I1205 20:56:44.279312   47365 logs.go:284] 0 containers: []
	W1205 20:56:44.279321   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:44.279328   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:44.279390   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:44.333572   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.333598   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:44.333605   47365 cri.go:89] found id: ""
	I1205 20:56:44.333614   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:44.333678   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.339080   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.343653   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:44.343687   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.412744   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:44.412785   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.457374   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:44.457402   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.521640   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:44.521676   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:44.536612   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:44.536636   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.586795   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:44.586836   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:45.065254   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:45.065293   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:45.126209   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:45.126242   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:45.166553   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:45.166580   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:45.214849   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:45.214887   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:45.371687   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:45.371732   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:45.417585   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:45.417615   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:45.455524   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:45.455559   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:44.965462   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.967433   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:47.272958   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.274398   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.621173   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.236869123s)
	I1205 20:56:46.621264   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:46.636086   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:46.647003   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:46.657201   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:46.657241   46866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:56:46.882231   46866 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:48.007463   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:56:48.023675   47365 api_server.go:72] duration metric: took 4m15.243410399s to wait for apiserver process to appear ...
	I1205 20:56:48.023713   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:56:48.023748   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:48.023818   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:48.067278   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.067301   47365 cri.go:89] found id: ""
	I1205 20:56:48.067308   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:48.067359   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.072370   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:48.072446   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:48.118421   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:48.118444   47365 cri.go:89] found id: ""
	I1205 20:56:48.118453   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:48.118509   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.123954   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:48.124019   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:48.173864   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:48.173890   47365 cri.go:89] found id: ""
	I1205 20:56:48.173900   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:48.173955   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.178717   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:48.178790   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:48.221891   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:48.221915   47365 cri.go:89] found id: ""
	I1205 20:56:48.221924   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:48.221985   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.226811   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:48.226886   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:48.271431   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:48.271454   47365 cri.go:89] found id: ""
	I1205 20:56:48.271463   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:48.271518   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.276572   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:48.276655   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:48.326438   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:48.326466   47365 cri.go:89] found id: ""
	I1205 20:56:48.326476   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:48.326534   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.334539   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:48.334611   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:48.377929   47365 cri.go:89] found id: ""
	I1205 20:56:48.377955   47365 logs.go:284] 0 containers: []
	W1205 20:56:48.377965   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:48.377973   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:48.378035   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:48.430599   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:48.430621   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:48.430629   47365 cri.go:89] found id: ""
	I1205 20:56:48.430638   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:48.430691   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.434882   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.439269   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:48.439299   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.495069   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:48.495113   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:48.955220   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:48.955257   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:48.971222   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:48.971246   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:49.108437   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:49.108470   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:49.150916   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:49.150940   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:49.207092   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:49.207141   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:49.251940   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:49.251969   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:49.293885   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:49.293918   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:49.349151   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:49.349187   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:49.403042   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:49.403079   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:49.466816   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:49.466858   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:49.525300   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:49.525341   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:49.467873   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.659950   46700 pod_ready.go:81] duration metric: took 4m0.000950283s waiting for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:49.659985   46700 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:49.660008   46700 pod_ready.go:38] duration metric: took 4m1.200539602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:49.660056   46700 kubeadm.go:640] restartCluster took 5m17.548124184s
	W1205 20:56:49.660130   46700 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:49.660162   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:51.776117   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:54.275521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:52.099610   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:56:52.106838   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:56:52.109813   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:56:52.109835   47365 api_server.go:131] duration metric: took 4.086114093s to wait for apiserver health ...
	I1205 20:56:52.109845   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:56:52.109874   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:52.109929   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:52.155290   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:52.155319   47365 cri.go:89] found id: ""
	I1205 20:56:52.155328   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:52.155382   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.160069   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:52.160137   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:52.197857   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.197885   47365 cri.go:89] found id: ""
	I1205 20:56:52.197894   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:52.197956   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.203012   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:52.203075   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:52.257881   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.257904   47365 cri.go:89] found id: ""
	I1205 20:56:52.257914   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:52.257972   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.264817   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:52.264899   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:52.313302   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.313331   47365 cri.go:89] found id: ""
	I1205 20:56:52.313341   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:52.313398   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.318864   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:52.318972   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:52.389306   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.389333   47365 cri.go:89] found id: ""
	I1205 20:56:52.389342   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:52.389400   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.406125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:52.406194   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:52.458735   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:52.458760   47365 cri.go:89] found id: ""
	I1205 20:56:52.458770   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:52.458821   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.463571   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:52.463642   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:52.529035   47365 cri.go:89] found id: ""
	I1205 20:56:52.529067   47365 logs.go:284] 0 containers: []
	W1205 20:56:52.529079   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:52.529088   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:52.529157   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:52.583543   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:52.583578   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.583585   47365 cri.go:89] found id: ""
	I1205 20:56:52.583594   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:52.583649   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.589299   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.595000   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:52.595024   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.671447   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:52.671487   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.719185   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:52.719223   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:52.780173   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:52.780203   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.823808   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:52.823843   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.874394   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:52.874428   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:52.938139   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:52.938177   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.982386   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:52.982414   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:53.029082   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:53.029111   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:53.447057   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:53.447099   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:53.465029   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:53.465066   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:53.627351   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:53.627400   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:53.694357   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:53.694393   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:56.267579   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:56:56.267614   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.267624   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.267631   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.267638   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.267644   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.267650   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.267660   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.267672   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.267683   47365 system_pods.go:74] duration metric: took 4.157830691s to wait for pod list to return data ...
	I1205 20:56:56.267696   47365 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:56:56.271148   47365 default_sa.go:45] found service account: "default"
	I1205 20:56:56.271170   47365 default_sa.go:55] duration metric: took 3.468435ms for default service account to be created ...
	I1205 20:56:56.271176   47365 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:56:56.277630   47365 system_pods.go:86] 8 kube-system pods found
	I1205 20:56:56.277654   47365 system_pods.go:89] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.277660   47365 system_pods.go:89] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.277665   47365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.277669   47365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.277674   47365 system_pods.go:89] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.277679   47365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.277688   47365 system_pods.go:89] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.277696   47365 system_pods.go:89] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.277715   47365 system_pods.go:126] duration metric: took 6.533492ms to wait for k8s-apps to be running ...
	I1205 20:56:56.277726   47365 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:56:56.277772   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:56.296846   47365 system_svc.go:56] duration metric: took 19.109991ms WaitForService to wait for kubelet.
	I1205 20:56:56.296877   47365 kubeadm.go:581] duration metric: took 4m23.516618576s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:56:56.296902   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:56:56.301504   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:56:56.301530   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:56:56.301542   47365 node_conditions.go:105] duration metric: took 4.634882ms to run NodePressure ...
	I1205 20:56:56.301552   47365 start.go:228] waiting for startup goroutines ...
	I1205 20:56:56.301560   47365 start.go:233] waiting for cluster config update ...
	I1205 20:56:56.301573   47365 start.go:242] writing updated cluster config ...
	I1205 20:56:56.301859   47365 ssh_runner.go:195] Run: rm -f paused
	I1205 20:56:56.357189   47365 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:56:56.358798   47365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-463614" cluster and "default" namespace by default
	I1205 20:56:54.756702   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096505481s)
	I1205 20:56:54.756786   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:54.774684   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:54.786308   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:54.796762   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:54.796809   46700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 20:56:55.081318   46700 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:58.569752   46866 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1205 20:56:58.569873   46866 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:56:58.569988   46866 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:56:58.570119   46866 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:56:58.570261   46866 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:56:58.570368   46866 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:56:58.572785   46866 out.go:204]   - Generating certificates and keys ...
	I1205 20:56:58.573020   46866 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:56:58.573232   46866 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:56:58.573410   46866 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:56:58.573510   46866 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:56:58.573717   46866 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:56:58.573868   46866 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:56:58.574057   46866 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:56:58.574229   46866 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:56:58.574517   46866 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:56:58.574760   46866 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:56:58.574903   46866 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:56:58.575070   46866 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:56:58.575205   46866 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:56:58.575363   46866 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:56:58.575515   46866 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:56:58.575600   46866 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:56:58.575799   46866 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:56:58.576083   46866 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:56:58.576320   46866 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:56:58.580654   46866 out.go:204]   - Booting up control plane ...
	I1205 20:56:58.581337   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:56:58.581851   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:56:58.582029   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:56:58.582667   46866 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:56:58.582988   46866 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:56:58.583126   46866 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:56:58.583631   46866 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:56:58.583908   46866 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502137 seconds
	I1205 20:56:58.584157   46866 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:56:58.584637   46866 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:56:58.584882   46866 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:56:58.585370   46866 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:56:58.585492   46866 kubeadm.go:322] [bootstrap-token] Using token: fap3k3.pr3uz4d90n7oyvds
	I1205 20:56:58.590063   46866 out.go:204]   - Configuring RBAC rules ...
	I1205 20:56:58.590356   46866 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:56:58.590482   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:56:58.590692   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:56:58.590887   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:56:58.591031   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:56:58.591131   46866 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:56:58.591269   46866 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:56:58.591323   46866 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:56:58.591378   46866 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:56:58.591383   46866 kubeadm.go:322] 
	I1205 20:56:58.591455   46866 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:56:58.591462   46866 kubeadm.go:322] 
	I1205 20:56:58.591554   46866 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:56:58.591559   46866 kubeadm.go:322] 
	I1205 20:56:58.591590   46866 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:56:58.591659   46866 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:56:58.591719   46866 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:56:58.591724   46866 kubeadm.go:322] 
	I1205 20:56:58.591787   46866 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:56:58.591793   46866 kubeadm.go:322] 
	I1205 20:56:58.591848   46866 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:56:58.591853   46866 kubeadm.go:322] 
	I1205 20:56:58.591914   46866 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:56:58.592015   46866 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:56:58.592093   46866 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:56:58.592099   46866 kubeadm.go:322] 
	I1205 20:56:58.592197   46866 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:56:58.592300   46866 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:56:58.592306   46866 kubeadm.go:322] 
	I1205 20:56:58.592403   46866 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592525   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:56:58.592550   46866 kubeadm.go:322] 	--control-plane 
	I1205 20:56:58.592558   46866 kubeadm.go:322] 
	I1205 20:56:58.592645   46866 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:56:58.592650   46866 kubeadm.go:322] 
	I1205 20:56:58.592743   46866 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592870   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:56:58.592880   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:56:58.592889   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:56:58.594456   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:56:56.773764   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.778395   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.595862   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:56:58.625177   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:56:58.683896   46866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:56:58.683977   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.684060   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=no-preload-143651 minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.741242   46866 ops.go:34] apiserver oom_adj: -16
	I1205 20:56:59.114129   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.238212   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.869086   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:00.368538   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.272299   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:03.272604   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:04.992619   46374 pod_ready.go:81] duration metric: took 4m0.000553964s waiting for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:04.992652   46374 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:57:04.992691   46374 pod_ready.go:38] duration metric: took 4m13.609186276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:04.992726   46374 kubeadm.go:640] restartCluster took 4m33.586092425s
	W1205 20:57:04.992782   46374 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:57:04.992808   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:57:00.868500   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.369084   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.368409   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.869341   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.368765   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.869054   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.368855   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.869144   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:05.368635   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.047040   46700 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1205 20:57:09.047132   46700 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:09.047236   46700 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:09.047350   46700 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:09.047462   46700 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:09.047583   46700 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:09.047693   46700 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:09.047752   46700 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1205 20:57:09.047825   46700 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:09.049606   46700 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:09.049706   46700 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:09.049802   46700 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:09.049885   46700 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:09.049963   46700 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:09.050058   46700 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:09.050148   46700 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:09.050235   46700 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:09.050350   46700 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:09.050468   46700 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:09.050563   46700 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:09.050627   46700 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:09.050732   46700 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:09.050817   46700 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:09.050897   46700 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:09.050997   46700 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:09.051080   46700 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:09.051165   46700 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:09.052610   46700 out.go:204]   - Booting up control plane ...
	I1205 20:57:09.052722   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:09.052806   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:09.052870   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:09.052965   46700 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:09.053103   46700 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:09.053203   46700 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005642 seconds
	I1205 20:57:09.053354   46700 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:09.053514   46700 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:09.053563   46700 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:09.053701   46700 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-061206 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 20:57:09.053783   46700 kubeadm.go:322] [bootstrap-token] Using token: syik3l.i77juzhd1iybx3my
	I1205 20:57:09.055286   46700 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:09.055409   46700 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:09.055599   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:09.055749   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:09.055862   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:09.055982   46700 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:09.056043   46700 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:09.056106   46700 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:09.056116   46700 kubeadm.go:322] 
	I1205 20:57:09.056197   46700 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:09.056207   46700 kubeadm.go:322] 
	I1205 20:57:09.056307   46700 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:09.056329   46700 kubeadm.go:322] 
	I1205 20:57:09.056377   46700 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:09.056456   46700 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:09.056533   46700 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:09.056540   46700 kubeadm.go:322] 
	I1205 20:57:09.056600   46700 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:09.056669   46700 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:09.056729   46700 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:09.056737   46700 kubeadm.go:322] 
	I1205 20:57:09.056804   46700 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1205 20:57:09.056868   46700 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:09.056874   46700 kubeadm.go:322] 
	I1205 20:57:09.056944   46700 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057093   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:09.057135   46700 kubeadm.go:322]     --control-plane 	  
	I1205 20:57:09.057150   46700 kubeadm.go:322] 
	I1205 20:57:09.057252   46700 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:09.057260   46700 kubeadm.go:322] 
	I1205 20:57:09.057360   46700 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057502   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:09.057514   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:57:09.057520   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:09.058762   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:05.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.368434   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.869228   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.369175   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.868933   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.369028   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.868920   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.369223   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.869130   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.369240   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.869318   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.369189   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.576975   46866 kubeadm.go:1088] duration metric: took 12.893071134s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:11.577015   46866 kubeadm.go:406] StartCluster complete in 5m9.690903424s
	I1205 20:57:11.577039   46866 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.577129   46866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:11.579783   46866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.580131   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:11.580364   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:57:11.580360   46866 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:11.580446   46866 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143651"
	I1205 20:57:11.580467   46866 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143651"
	W1205 20:57:11.580479   46866 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:11.580518   46866 addons.go:69] Setting metrics-server=true in profile "no-preload-143651"
	I1205 20:57:11.580535   46866 addons.go:231] Setting addon metrics-server=true in "no-preload-143651"
	W1205 20:57:11.580544   46866 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:11.580575   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580583   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580982   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580994   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580497   46866 addons.go:69] Setting default-storageclass=true in profile "no-preload-143651"
	I1205 20:57:11.581018   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581027   46866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143651"
	I1205 20:57:11.581303   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581357   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.581383   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.600887   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1205 20:57:11.600886   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1205 20:57:11.601552   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601681   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601760   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I1205 20:57:11.602152   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602177   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602260   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.602348   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602370   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602603   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602719   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602806   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.602996   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.603020   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.603329   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.603379   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.603477   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.603997   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.604040   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.606962   46866 addons.go:231] Setting addon default-storageclass=true in "no-preload-143651"
	W1205 20:57:11.606986   46866 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:11.607009   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.607331   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.607363   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.624885   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I1205 20:57:11.625358   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.625857   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.625869   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.626331   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.626627   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I1205 20:57:11.626832   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.627179   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.631282   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1205 20:57:11.632431   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.632516   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.632599   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.632763   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.633113   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.633639   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.633883   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.634495   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.634539   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.634823   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.637060   46866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:11.635196   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.641902   46866 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:11.641932   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:11.641960   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.642616   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.644862   46866 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:11.647090   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:11.647113   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:11.647134   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.646852   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647539   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.647564   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647755   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.648063   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.648295   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.648520   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.654458   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.654493   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654522   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.654556   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654801   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.655015   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.655247   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.661244   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1205 20:57:11.661886   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.662508   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.662534   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.663651   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.663907   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.666067   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.666501   46866 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.666523   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:11.666543   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.669659   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670106   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.670132   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670479   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.670673   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.670802   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.670915   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.816687   46866 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143651" context rescaled to 1 replicas
	I1205 20:57:11.816742   46866 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:11.820014   46866 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:09.060305   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:09.069861   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:09.093691   46700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:09.093847   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.093914   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=old-k8s-version-061206 minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.123857   46700 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:09.315555   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.435904   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.049845   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.549703   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.049931   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.549848   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.049776   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.549841   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.050053   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.549531   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.821903   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:11.831116   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:11.867528   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.969463   46866 node_ready.go:35] waiting up to 6m0s for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:11.976207   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:11.976235   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:11.977230   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:12.003110   46866 node_ready.go:49] node "no-preload-143651" has status "Ready":"True"
	I1205 20:57:12.003132   46866 node_ready.go:38] duration metric: took 33.629273ms waiting for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:12.003142   46866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:12.053173   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:12.053208   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:12.140411   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:12.170492   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.170521   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:12.251096   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.778963   46866 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
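The replace command logged at 20:57:11.831116 pipes the coredns ConfigMap through sed so that CoreDNS can resolve host.minikube.internal to the host-side gateway address (192.168.61.1 in this run). A minimal way to confirm the injected hosts block landed, assuming the kubectl context written by minikube is named after the profile, is:

    kubectl --context no-preload-143651 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

The printed Corefile should then contain a hosts { 192.168.61.1 host.minikube.internal ... fallthrough } stanza just above the forward . /etc/resolv.conf line, matching the sed expression shown above.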
	I1205 20:57:12.779026   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779040   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779377   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779402   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.779411   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779411   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.779418   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779625   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779665   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.786021   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.786045   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.786331   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.786380   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.786400   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194477   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217217088s)
	I1205 20:57:13.194529   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194543   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.194883   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.194929   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.194948   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194960   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194970   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.195198   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.195212   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562441   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311301688s)
	I1205 20:57:13.562496   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562512   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.562826   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.562845   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562856   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562865   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.563115   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.563164   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.563177   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.563190   46866 addons.go:467] Verifying addon metrics-server=true in "no-preload-143651"
	I1205 20:57:13.564940   46866 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:13.566316   46866 addons.go:502] enable addons completed in 1.985974766s: enabled=[default-storageclass storage-provisioner metrics-server]
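With the three addons reported as enabled, a rough manual spot check of the same components (a sketch only; it assumes the kubectl context name matches the profile) would be:

    kubectl --context no-preload-143651 get storageclass
    kubectl --context no-preload-143651 -n kube-system get pod storage-provisioner
    kubectl --context no-preload-143651 -n kube-system rollout status deployment/metrics-server --timeout=2m

Note that in this run metrics-server is deliberately pointed at fake.domain/registry.k8s.io/echoserver:1.4, so its pod stays Pending (see the Pending / ContainersNotReady entries below) and the rollout status check is not expected to complete.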
	I1205 20:57:14.389400   46866 pod_ready.go:102] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:15.388445   46866 pod_ready.go:92] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.388478   46866 pod_ready.go:81] duration metric: took 3.248030471s waiting for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.388493   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.391728   46866 pod_ready.go:97] error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391759   46866 pod_ready.go:81] duration metric: took 3.251498ms waiting for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:15.391772   46866 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391781   46866 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399725   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.399745   46866 pod_ready.go:81] duration metric: took 7.956804ms waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399759   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407412   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.407436   46866 pod_ready.go:81] duration metric: took 7.672123ms waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407446   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414249   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.414295   46866 pod_ready.go:81] duration metric: took 6.840313ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414309   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587237   46866 pod_ready.go:92] pod "kube-proxy-6txsz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.587271   46866 pod_ready.go:81] duration metric: took 172.95478ms waiting for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587286   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985901   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.985930   46866 pod_ready.go:81] duration metric: took 398.634222ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985943   46866 pod_ready.go:38] duration metric: took 3.982790764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:15.985960   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:15.986019   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:16.009052   46866 api_server.go:72] duration metric: took 4.192253908s to wait for apiserver process to appear ...
	I1205 20:57:16.009082   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:16.009100   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:57:16.014689   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:57:16.015758   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:57:16.015781   46866 api_server.go:131] duration metric: took 6.691652ms to wait for apiserver health ...
	I1205 20:57:16.015791   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:16.188198   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:16.188232   46866 system_pods.go:61] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.188240   46866 system_pods.go:61] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.188246   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.188254   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.188261   46866 system_pods.go:61] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.188267   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.188279   46866 system_pods.go:61] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.188290   46866 system_pods.go:61] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.188301   46866 system_pods.go:74] duration metric: took 172.503422ms to wait for pod list to return data ...
	I1205 20:57:16.188311   46866 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:16.384722   46866 default_sa.go:45] found service account: "default"
	I1205 20:57:16.384759   46866 default_sa.go:55] duration metric: took 196.435091ms for default service account to be created ...
	I1205 20:57:16.384769   46866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:16.587515   46866 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:16.587542   46866 system_pods.go:89] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.587547   46866 system_pods.go:89] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.587554   46866 system_pods.go:89] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.587561   46866 system_pods.go:89] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.587567   46866 system_pods.go:89] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.587574   46866 system_pods.go:89] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.587585   46866 system_pods.go:89] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.587593   46866 system_pods.go:89] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.587604   46866 system_pods.go:126] duration metric: took 202.829744ms to wait for k8s-apps to be running ...
	I1205 20:57:16.587613   46866 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:16.587654   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:16.602489   46866 system_svc.go:56] duration metric: took 14.864421ms WaitForService to wait for kubelet.
	I1205 20:57:16.602521   46866 kubeadm.go:581] duration metric: took 4.785728725s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:16.602545   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:16.785610   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:16.785646   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:16.785663   46866 node_conditions.go:105] duration metric: took 183.112914ms to run NodePressure ...
	I1205 20:57:16.785677   46866 start.go:228] waiting for startup goroutines ...
	I1205 20:57:16.785686   46866 start.go:233] waiting for cluster config update ...
	I1205 20:57:16.785705   46866 start.go:242] writing updated cluster config ...
	I1205 20:57:16.786062   46866 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:16.840981   46866 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1205 20:57:16.842980   46866 out.go:177] * Done! kubectl is now configured to use "no-preload-143651" cluster and "default" namespace by default
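The closing lines for this profile report a skew of one minor version between the local kubectl (1.28.4) and the cluster (v1.29.0-rc.1); kubectl is supported within one minor version of the API server in either direction, so this is informational only. A quick way to confirm both versions against the freshly written context, assuming it is named after the profile, is:

    kubectl config use-context no-preload-143651
    kubectl version --output=yaml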
	I1205 20:57:14.049305   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:14.549423   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.050061   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.550221   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.049450   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.550094   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.049900   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.549923   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.050255   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.549399   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.615362   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.62253521s)
	I1205 20:57:19.615425   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:19.633203   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:57:19.643629   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:57:19.653655   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:57:19.653717   46374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:57:19.709748   46374 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:57:19.709836   46374 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:19.887985   46374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:19.888143   46374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:19.888243   46374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:20.145182   46374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:20.147189   46374 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:20.147319   46374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:20.147389   46374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:20.147482   46374 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:20.147875   46374 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:20.148583   46374 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:20.149486   46374 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:20.150362   46374 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:20.150974   46374 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:20.151523   46374 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:20.152166   46374 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:20.152419   46374 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:20.152504   46374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:20.435395   46374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:20.606951   46374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:20.754435   46374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:20.953360   46374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:20.954288   46374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:20.958413   46374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:19.049689   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.549608   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.049856   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.550245   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.050001   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.549839   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.049908   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.549764   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.050204   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.550196   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.049420   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.550152   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.050103   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.202067   46700 kubeadm.go:1088] duration metric: took 16.108268519s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:25.202100   46700 kubeadm.go:406] StartCluster complete in 5m53.142100786s
	I1205 20:57:25.202121   46700 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.202211   46700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:25.204920   46700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.205284   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:25.205635   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:57:25.205792   46700 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:25.205865   46700 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061206"
	I1205 20:57:25.205888   46700 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-061206"
	W1205 20:57:25.205896   46700 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:25.205954   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.205982   46700 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206011   46700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061206"
	I1205 20:57:25.206429   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206436   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206457   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206459   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206517   46700 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206531   46700 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-061206"
	W1205 20:57:25.206538   46700 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:25.206578   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.206906   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206936   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.228876   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1205 20:57:25.228902   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1205 20:57:25.229036   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1205 20:57:25.229487   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229569   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229646   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.230209   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230230   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230413   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230426   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230468   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230492   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230851   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.231494   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.231520   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.231955   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.232544   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.232578   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.233084   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.233307   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.237634   46700 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-061206"
	W1205 20:57:25.237660   46700 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:25.237691   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.238103   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.238138   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.252274   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I1205 20:57:25.252709   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.253307   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.253327   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.253689   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.253874   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.255891   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.258376   46700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:25.256849   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I1205 20:57:25.260119   46700 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.260145   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:25.260168   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.261358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.262042   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.262063   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.262590   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.262765   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.265705   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.265905   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.267942   46700 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:25.266347   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.266528   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.269653   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.269661   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:25.269687   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:25.269708   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.270383   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.270602   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.270764   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.274415   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.274914   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.274939   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.275267   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.275451   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.275594   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.275736   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.282847   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1205 20:57:25.283552   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.284174   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.284192   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.284659   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.285434   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.285469   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.306845   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1205 20:57:25.307358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.307884   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.307905   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.308302   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.308605   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.310363   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.310649   46700 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.310663   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:25.310682   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.313904   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314451   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.314482   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314756   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.314941   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.315053   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.315153   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.456874   46700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061206" context rescaled to 1 replicas
	I1205 20:57:25.456922   46700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:25.459008   46700 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:20.960444   46374 out.go:204]   - Booting up control plane ...
	I1205 20:57:20.960603   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:20.960721   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:20.961220   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:20.981073   46374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:20.982383   46374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:20.982504   46374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:57:21.127167   46374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:25.460495   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:25.531367   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.531600   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:25.531618   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:25.543589   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.624622   46700 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.624655   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:25.660979   46700 node_ready.go:49] node "old-k8s-version-061206" has status "Ready":"True"
	I1205 20:57:25.661005   46700 node_ready.go:38] duration metric: took 36.286483ms waiting for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.661017   46700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:25.666179   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:25.666208   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:25.796077   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:26.018114   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.018141   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:26.124357   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.905138   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.37373154s)
	I1205 20:57:26.905210   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905526   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905553   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.905567   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905576   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:26.905905   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905917   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964563   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.964593   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.964920   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.964940   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964974   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465231   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.92160273s)
	I1205 20:57:27.465236   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.840348969s)
	I1205 20:57:27.465312   46700 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:27.465289   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465379   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.465718   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465761   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.465771   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.465780   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465790   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.467788   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.467820   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.467829   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628166   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.503702639s)
	I1205 20:57:27.628242   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628262   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628592   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628617   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628627   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628637   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628714   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.628851   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628866   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628885   46700 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-061206"
	I1205 20:57:27.632134   46700 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:27.634065   46700 addons.go:502] enable addons completed in 2.428270131s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:28.052082   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:29.630980   46374 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503524 seconds
	I1205 20:57:29.631109   46374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:29.651107   46374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:30.184174   46374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:30.184401   46374 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-331495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:57:30.703275   46374 kubeadm.go:322] [bootstrap-token] Using token: 28cbrl.nve3765a0enwbcr0
	I1205 20:57:30.705013   46374 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:30.705155   46374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:30.718386   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:57:30.727275   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:30.734448   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:30.741266   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:30.746706   46374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:30.765198   46374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:57:31.046194   46374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:31.133417   46374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:31.133438   46374 kubeadm.go:322] 
	I1205 20:57:31.133501   46374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:31.133509   46374 kubeadm.go:322] 
	I1205 20:57:31.133647   46374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:31.133667   46374 kubeadm.go:322] 
	I1205 20:57:31.133707   46374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:31.133781   46374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:31.133853   46374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:31.133863   46374 kubeadm.go:322] 
	I1205 20:57:31.133918   46374 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:57:31.133925   46374 kubeadm.go:322] 
	I1205 20:57:31.133983   46374 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:57:31.133993   46374 kubeadm.go:322] 
	I1205 20:57:31.134042   46374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:31.134103   46374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:31.134262   46374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:31.134300   46374 kubeadm.go:322] 
	I1205 20:57:31.134417   46374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:57:31.134526   46374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:31.134541   46374 kubeadm.go:322] 
	I1205 20:57:31.134671   46374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.134823   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:31.134858   46374 kubeadm.go:322] 	--control-plane 
	I1205 20:57:31.134867   46374 kubeadm.go:322] 
	I1205 20:57:31.134986   46374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:31.134997   46374 kubeadm.go:322] 
	I1205 20:57:31.135114   46374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.135272   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:31.135908   46374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:57:31.135934   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:57:31.135944   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:31.137845   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:30.540402   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:33.040756   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:31.139429   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:31.181897   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:31.202833   46374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:31.202901   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.202910   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=embed-certs-331495 minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.298252   46374 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:31.569929   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.694250   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.294912   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.795323   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.295495   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.794998   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.294843   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.794730   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:35.295505   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.538542   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.538568   46700 pod_ready.go:81] duration metric: took 8.742457359s waiting for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.538579   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.540738   46700 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540763   46700 pod_ready.go:81] duration metric: took 2.177251ms waiting for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:34.540771   46700 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540777   46700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545336   46700 pod_ready.go:92] pod "kube-proxy-j68qr" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.545360   46700 pod_ready.go:81] duration metric: took 4.576584ms waiting for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545370   46700 pod_ready.go:38] duration metric: took 8.884340587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:34.545387   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:34.545442   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:34.561744   46700 api_server.go:72] duration metric: took 9.104792218s to wait for apiserver process to appear ...
	I1205 20:57:34.561769   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:34.561786   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:57:34.568456   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:57:34.569584   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:57:34.569608   46700 api_server.go:131] duration metric: took 7.832231ms to wait for apiserver health ...
	I1205 20:57:34.569618   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:34.573936   46700 system_pods.go:59] 4 kube-system pods found
	I1205 20:57:34.573962   46700 system_pods.go:61] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.573969   46700 system_pods.go:61] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.573979   46700 system_pods.go:61] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.573989   46700 system_pods.go:61] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.574004   46700 system_pods.go:74] duration metric: took 4.378461ms to wait for pod list to return data ...
	I1205 20:57:34.574016   46700 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:34.577236   46700 default_sa.go:45] found service account: "default"
	I1205 20:57:34.577258   46700 default_sa.go:55] duration metric: took 3.232577ms for default service account to be created ...
	I1205 20:57:34.577268   46700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:34.581061   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.581080   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.581086   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.581093   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.581098   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.581112   46700 retry.go:31] will retry after 312.287284ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:34.898504   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.898531   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.898536   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.898545   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.898549   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.898563   46700 retry.go:31] will retry after 340.858289ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.244211   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.244237   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.244242   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.244249   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.244253   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.244267   46700 retry.go:31] will retry after 398.30611ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.649011   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.649042   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.649050   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.649061   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.649068   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.649086   46700 retry.go:31] will retry after 397.404602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.052047   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.052079   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.052087   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.052097   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.052105   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.052124   46700 retry.go:31] will retry after 604.681853ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.662177   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.662206   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.662213   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.662223   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.662229   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.662247   46700 retry.go:31] will retry after 732.227215ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:37.399231   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:37.399264   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:37.399272   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:37.399282   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:37.399289   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:37.399308   46700 retry.go:31] will retry after 1.17612773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.795241   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.295081   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.795352   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.295506   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.794785   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.294797   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.794948   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.295478   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.795706   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:40.295444   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.581173   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:38.581201   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:38.581207   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:38.581220   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:38.581225   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:38.581239   46700 retry.go:31] will retry after 1.118915645s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:39.704807   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:39.704835   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:39.704841   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:39.704847   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:39.704854   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:39.704872   46700 retry.go:31] will retry after 1.49556329s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:41.205278   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:41.205316   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:41.205324   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:41.205331   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:41.205336   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:41.205357   46700 retry.go:31] will retry after 2.273757829s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:43.485079   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:43.485109   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:43.485125   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:43.485132   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:43.485137   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:43.485153   46700 retry.go:31] will retry after 2.2120181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:40.794725   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.295631   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.795542   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.295514   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.795481   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.295525   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.795463   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.295442   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.451570   46374 kubeadm.go:1088] duration metric: took 13.248732973s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:44.451605   46374 kubeadm.go:406] StartCluster complete in 5m13.096778797s
	I1205 20:57:44.451631   46374 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.451730   46374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:44.454306   46374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.454587   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:44.454611   46374 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:44.454695   46374 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-331495"
	I1205 20:57:44.454720   46374 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-331495"
	W1205 20:57:44.454731   46374 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:44.454766   46374 addons.go:69] Setting default-storageclass=true in profile "embed-certs-331495"
	I1205 20:57:44.454781   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.454783   46374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-331495"
	I1205 20:57:44.454840   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:57:44.454884   46374 addons.go:69] Setting metrics-server=true in profile "embed-certs-331495"
	I1205 20:57:44.454899   46374 addons.go:231] Setting addon metrics-server=true in "embed-certs-331495"
	W1205 20:57:44.454907   46374 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:44.454949   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.455191   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455213   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455216   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455231   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455237   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455259   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.473063   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1205 20:57:44.473083   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1205 20:57:44.473135   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I1205 20:57:44.473509   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.473642   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474153   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474171   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474179   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474197   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474336   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474566   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474637   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474761   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474785   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474877   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.475234   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475260   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.475295   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.475833   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475871   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.478828   46374 addons.go:231] Setting addon default-storageclass=true in "embed-certs-331495"
	W1205 20:57:44.478852   46374 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:44.478882   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.479277   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.479311   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.493193   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1205 20:57:44.493380   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:57:44.493637   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.493775   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.494092   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494108   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494242   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494252   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494488   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494624   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.494682   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.496908   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.497156   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.498954   46374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:44.500583   46374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:44.499205   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1205 20:57:44.502186   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:44.502199   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:44.502214   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.502313   46374 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.502329   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:44.502349   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.503728   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.504065   46374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-331495" context rescaled to 1 replicas
	I1205 20:57:44.504105   46374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:44.505773   46374 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:44.507622   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:44.505350   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.507719   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.505638   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.507792   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.507821   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.506710   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.507399   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508237   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.508287   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508353   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.508369   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508440   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.508506   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.508671   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508678   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.508996   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.509016   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.509373   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.509567   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.525720   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I1205 20:57:44.526352   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.526817   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.526831   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.527096   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.527248   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.529415   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.529714   46374 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.529725   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:44.529737   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.532475   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533019   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.533042   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.533393   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.533527   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.533614   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.688130   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:44.688235   46374 node_ready.go:35] waiting up to 6m0s for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727420   46374 node_ready.go:49] node "embed-certs-331495" has status "Ready":"True"
	I1205 20:57:44.727442   46374 node_ready.go:38] duration metric: took 39.185885ms waiting for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727450   46374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:44.732130   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:44.732147   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:44.738201   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.771438   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.811415   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:44.811441   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:44.813276   46374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:44.891164   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:44.891188   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:44.982166   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:46.640482   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.952307207s)
	I1205 20:57:46.640514   46374 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:46.640492   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.902257941s)
	I1205 20:57:46.640549   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640567   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.640954   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.640974   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.640985   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640994   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.641299   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.641316   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.641317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669046   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.669072   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.669393   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669467   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.669486   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229043   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.457564146s)
	I1205 20:57:47.229106   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229122   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.229427   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.229442   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229451   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229460   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.230375   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:47.230383   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.230399   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.269645   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.287430037s)
	I1205 20:57:47.269701   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.269717   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270028   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270044   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270053   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.270062   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270370   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270387   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270397   46374 addons.go:467] Verifying addon metrics-server=true in "embed-certs-331495"
	I1205 20:57:47.272963   46374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:45.704352   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:45.704382   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:45.704392   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:45.704402   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:45.704408   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:45.704427   46700 retry.go:31] will retry after 3.581529213s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:47.274340   46374 addons.go:502] enable addons completed in 2.819728831s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:47.280325   46374 pod_ready.go:102] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:48.746184   46374 pod_ready.go:92] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.746205   46374 pod_ready.go:81] duration metric: took 3.932903963s waiting for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.746212   46374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752060   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.752078   46374 pod_ready.go:81] duration metric: took 5.859638ms waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752088   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757347   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.757367   46374 pod_ready.go:81] duration metric: took 5.273527ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757375   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762850   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.762869   46374 pod_ready.go:81] duration metric: took 5.4878ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762876   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767874   46374 pod_ready.go:92] pod "kube-proxy-tbr8k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.767896   46374 pod_ready.go:81] duration metric: took 5.013139ms waiting for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767907   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141813   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:49.141836   46374 pod_ready.go:81] duration metric: took 373.922185ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141844   46374 pod_ready.go:38] duration metric: took 4.414384404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:49.141856   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:49.141898   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:49.156536   46374 api_server.go:72] duration metric: took 4.652397468s to wait for apiserver process to appear ...
	I1205 20:57:49.156566   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:49.156584   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:57:49.162837   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:57:49.164588   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:57:49.164606   46374 api_server.go:131] duration metric: took 8.03498ms to wait for apiserver health ...
	I1205 20:57:49.164613   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:49.346033   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:49.346065   46374 system_pods.go:61] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.346069   46374 system_pods.go:61] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.346074   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.346079   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.346082   46374 system_pods.go:61] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.346086   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.346092   46374 system_pods.go:61] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.346098   46374 system_pods.go:61] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:57:49.346105   46374 system_pods.go:74] duration metric: took 181.48718ms to wait for pod list to return data ...
	I1205 20:57:49.346111   46374 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:49.541758   46374 default_sa.go:45] found service account: "default"
	I1205 20:57:49.541783   46374 default_sa.go:55] duration metric: took 195.666774ms for default service account to be created ...
	I1205 20:57:49.541791   46374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:49.746101   46374 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:49.746131   46374 system_pods.go:89] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.746136   46374 system_pods.go:89] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.746142   46374 system_pods.go:89] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.746147   46374 system_pods.go:89] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.746150   46374 system_pods.go:89] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.746155   46374 system_pods.go:89] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.746170   46374 system_pods.go:89] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.746175   46374 system_pods.go:89] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Running
	I1205 20:57:49.746183   46374 system_pods.go:126] duration metric: took 204.388635ms to wait for k8s-apps to be running ...
	I1205 20:57:49.746193   46374 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:49.746241   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:49.764758   46374 system_svc.go:56] duration metric: took 18.554759ms WaitForService to wait for kubelet.
	I1205 20:57:49.764784   46374 kubeadm.go:581] duration metric: took 5.260652386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:49.764801   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:49.942067   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:49.942095   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:49.942105   46374 node_conditions.go:105] duration metric: took 177.300297ms to run NodePressure ...
	I1205 20:57:49.942114   46374 start.go:228] waiting for startup goroutines ...
	I1205 20:57:49.942120   46374 start.go:233] waiting for cluster config update ...
	I1205 20:57:49.942129   46374 start.go:242] writing updated cluster config ...
	I1205 20:57:49.942407   46374 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:49.995837   46374 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:57:49.997691   46374 out.go:177] * Done! kubectl is now configured to use "embed-certs-331495" cluster and "default" namespace by default
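For reference, the healthz check recorded by the api_server.go lines above amounts to an HTTPS GET against the apiserver and treating a 200 "ok" response as healthy. A minimal Go sketch of such a probe follows; the URL and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation, which authenticates with the cluster CA.

    // healthz_probe.go - minimal sketch of an apiserver /healthz probe.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // TLS verification is skipped only because this sketch has no CA bundle.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        // The log above shows the endpoint returning 200 with body "ok".
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.72.180:8443/healthz")
        fmt.Println(ok, err)
    }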
	I1205 20:57:49.291672   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:49.291700   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:49.291705   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:49.291713   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.291718   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:49.291736   46700 retry.go:31] will retry after 3.015806566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:52.313677   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:52.313703   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:52.313711   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:52.313721   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:52.313727   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:52.313747   46700 retry.go:31] will retry after 4.481475932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:56.804282   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:56.804308   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:56.804314   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:56.804321   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:56.804325   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:56.804340   46700 retry.go:31] will retry after 6.744179014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:03.556623   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:58:03.556652   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:03.556660   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:03.556669   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:03.556676   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:03.556696   46700 retry.go:31] will retry after 7.974872066s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:11.540488   46700 system_pods.go:86] 6 kube-system pods found
	I1205 20:58:11.540516   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:11.540522   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Pending
	I1205 20:58:11.540526   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Pending
	I1205 20:58:11.540530   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:11.540537   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:11.540541   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:11.540556   46700 retry.go:31] will retry after 10.29278609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:21.841415   46700 system_pods.go:86] 7 kube-system pods found
	I1205 20:58:21.841442   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:21.841450   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:21.841457   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:21.841463   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:21.841468   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:21.841478   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:21.841485   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:21.841503   46700 retry.go:31] will retry after 10.997616244s: missing components: kube-scheduler
	I1205 20:58:32.846965   46700 system_pods.go:86] 8 kube-system pods found
	I1205 20:58:32.846999   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:32.847007   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:32.847016   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:32.847023   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:32.847028   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:32.847032   46700 system_pods.go:89] "kube-scheduler-old-k8s-version-061206" [e19a40ac-ac9b-4dc8-8ed3-c13da266bb88] Running
	I1205 20:58:32.847041   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:32.847049   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:32.847061   46700 system_pods.go:126] duration metric: took 58.26978612s to wait for k8s-apps to be running ...
	I1205 20:58:32.847074   46700 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:58:32.847122   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:58:32.866233   46700 system_svc.go:56] duration metric: took 19.150294ms WaitForService to wait for kubelet.
	I1205 20:58:32.866267   46700 kubeadm.go:581] duration metric: took 1m7.409317219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:58:32.866308   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:58:32.870543   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:58:32.870569   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:58:32.870581   46700 node_conditions.go:105] duration metric: took 4.266682ms to run NodePressure ...
	I1205 20:58:32.870604   46700 start.go:228] waiting for startup goroutines ...
	I1205 20:58:32.870630   46700 start.go:233] waiting for cluster config update ...
	I1205 20:58:32.870646   46700 start.go:242] writing updated cluster config ...
	I1205 20:58:32.870888   46700 ssh_runner.go:195] Run: rm -f paused
	I1205 20:58:32.922554   46700 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1205 20:58:32.924288   46700 out.go:177] 
	W1205 20:58:32.925788   46700 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1205 20:58:32.927148   46700 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1205 20:58:32.928730   46700 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-061206" cluster and "default" namespace by default
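The retry.go lines above show the wait loop that repeatedly lists kube-system pods, reports which control-plane components are still missing, and backs off between attempts. A self-contained Go sketch of that polling pattern follows; the function and callback names are hypothetical and this is not minikube's actual code.

    // wait_components.go - sketch of a poll-with-backoff wait for missing components.
    package main

    import (
        "fmt"
        "time"
    )

    // waitForComponents polls check() until it reports nothing missing or the
    // timeout passes, growing the backoff roughly like the intervals in the log.
    func waitForComponents(check func() []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 2 * time.Second
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; still missing: %v", missing)
            }
            fmt.Printf("will retry after %s: missing components: %v\n", backoff, missing)
            time.Sleep(backoff)
            if backoff < 30*time.Second {
                backoff += backoff / 2
            }
        }
    }

    func main() {
        attempts := 0
        // check is a stand-in for listing kube-system pods and collecting
        // the names of components that are not yet Running.
        err := waitForComponents(func() []string {
            attempts++
            if attempts < 3 {
                return []string{"etcd", "kube-apiserver", "kube-scheduler"}
            }
            return nil
        }, 2*time.Minute)
        fmt.Println(err)
    }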
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:14 UTC, ends at Tue 2023-12-05 21:07:34 UTC. --
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.660762778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1alpha2,}" file="go-grpc-middleware/chain.go:25" id=76178463-b6ce-46ad-ae42-35cfe5a70945 name=/runtime.v1alpha2.RuntimeService/Version
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.661788193Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=df009cb7-5399-4ee6-9e5a-669c124b3dc1 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.661973537Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f40d5a209a3c62bdfb930e5af33656b757ad71b380226f4627ef832b960c4bf,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-jbxkl,Uid:ea6e50b4-4224-441e-878d-bff37f046528,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809848674161472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-jbxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6e50b4-4224-441e-878d-bff37f046528,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:28.319988569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e493874-629d-4446-b372-47fa158aea
4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809847819747975,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-05T20:57:27.471462234Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qm52j,Uid:19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809846203737481,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.475186936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j68qr,Uid:857e6815-cb4c-477d-af2
4-941a37f65f6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809845770031150,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.42285965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-061206,Uid:42f4027feb4c207207ef36a204ac558e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817198887929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207
207ef36a204ac558e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 42f4027feb4c207207ef36a204ac558e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645985869Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-061206,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817159423636,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-05T20:56:56.645982018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a55020c50209daa1d78e8a3b3c
68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-061206,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817154900515,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645968944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-061206,Uid:9a4cd076e0e3bb6062b3f80cd3aea422,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817150165984,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a4cd076e0e3bb6062b3f80cd3aea422,kubernetes.io/config.seen: 2023-12-05T20:56:56.645984155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=df009cb7-5399-4ee6-9e5a-669c124b3dc1 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.665816815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8ddad3f1-4714-4855-bba6-00327d4a8909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.666073197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8ddad3f1-4714-4855-bba6-00327d4a8909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.667447944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8ddad3f1-4714-4855-bba6-00327d4a8909 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.687057552Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=2b54a9e9-d079-4c5f-b596-89b094d02d45 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.687304952Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f40d5a209a3c62bdfb930e5af33656b757ad71b380226f4627ef832b960c4bf,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-jbxkl,Uid:ea6e50b4-4224-441e-878d-bff37f046528,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809848674161472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-jbxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6e50b4-4224-441e-878d-bff37f046528,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:28.319988569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e493874-629d-4446-b372-47fa158aea
4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809847819747975,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-05T20:57:27.471462234Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qm52j,Uid:19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809846203737481,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.475186936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j68qr,Uid:857e6815-cb4c-477d-af2
4-941a37f65f6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809845770031150,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.42285965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-061206,Uid:42f4027feb4c207207ef36a204ac558e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817198887929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207
207ef36a204ac558e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 42f4027feb4c207207ef36a204ac558e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645985869Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-061206,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817159423636,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-05T20:56:56.645982018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a55020c50209daa1d78e8a3b3c
68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-061206,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817154900515,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645968944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-061206,Uid:9a4cd076e0e3bb6062b3f80cd3aea422,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817150165984,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a4cd076e0e3bb6062b3f80cd3aea422,kubernetes.io/config.seen: 2023-12-05T20:56:56.645984155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=2b54a9e9-d079-4c5f-b596-89b094d02d45 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.688109477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0f65b614-8f34-47ed-9330-0877dbcaa73a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.688159767Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0f65b614-8f34-47ed-9330-0877dbcaa73a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.688317088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0f65b614-8f34-47ed-9330-0877dbcaa73a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.701421546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e6e96a70-70c9-4f16-bd3a-b8001dc2309b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.701503858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e6e96a70-70c9-4f16-bd3a-b8001dc2309b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.703020731Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b50aa3a8-6ffa-4b37-b8f7-f1469fbbf0fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.703784689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810454703770861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b50aa3a8-6ffa-4b37-b8f7-f1469fbbf0fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.704364689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=71085c7b-8d18-4f72-9269-4e72d16f59fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.704455169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=71085c7b-8d18-4f72-9269-4e72d16f59fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.704695991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=71085c7b-8d18-4f72-9269-4e72d16f59fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.741694360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=08b11df2-c3da-40f1-8360-84ba8e628a1e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.741842639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=08b11df2-c3da-40f1-8360-84ba8e628a1e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.743740347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9df479b2-b1f3-4176-bc98-28fa37aa5034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.744166327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810454744153285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=9df479b2-b1f3-4176-bc98-28fa37aa5034 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.744854545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8775fa8f-c097-438e-9329-a6b25b821e72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.744939362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8775fa8f-c097-438e-9329-a6b25b821e72 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:07:34 old-k8s-version-061206 crio[708]: time="2023-12-05 21:07:34.745090446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8775fa8f-c097-438e-9329-a6b25b821e72 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68ef4ccaf4b56       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   5761d98d74764       storage-provisioner
	a508a24c599b8       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   c47670f9603a7       kube-proxy-j68qr
	c55c7658d0763       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   47f738c52328d       coredns-5644d7b6d9-qm52j
	aeedb4418156e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   877976c08d2da       etcd-old-k8s-version-061206
	d6f271a2baa00       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   151640bbfafca       kube-scheduler-old-k8s-version-061206
	aa2ee8e9a505e       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   a55020c50209d       kube-controller-manager-old-k8s-version-061206
	25d5772a51c5b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   1f1f86ca5bcbb       kube-apiserver-old-k8s-version-061206
	
	* 
	* ==> coredns [c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1] <==
	* .:53
	2023-12-05T20:57:27.313Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-05T20:57:27.313Z [INFO] CoreDNS-1.6.2
	2023-12-05T20:57:27.313Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-05T20:57:57.150Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-12-05T20:57:57.159Z [INFO] 127.0.0.1:60377 - 47643 "HINFO IN 8905990356429537435.7948724483024421708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008839679s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-061206
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-061206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=old-k8s-version-061206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:57:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:07:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:07:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:07:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:07:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    old-k8s-version-061206
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 490ff4de3cc346cbadefc512ca4ba833
	 System UUID:                490ff4de-3cc3-46cb-adef-c512ca4ba833
	 Boot ID:                    6369e2b2-de47-44a7-be57-652fcb308eee
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qm52j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-061206                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                kube-apiserver-old-k8s-version-061206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                kube-controller-manager-old-k8s-version-061206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                kube-proxy-j68qr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-061206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                metrics-server-74d5856cc6-jbxkl                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-061206  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.377184] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.456566] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153272] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.490349] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.412969] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.122033] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.155842] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.129914] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.243242] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +20.162291] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +0.500222] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.991973] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 5 20:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.246172] hrtimer: interrupt took 4422961 ns
	[Dec 5 20:56] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +1.269085] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 20:57] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f] <==
	* 2023-12-05 20:56:59.527122 I | raft: 70e810c2542c58a7 became follower at term 1
	2023-12-05 20:56:59.537329 W | auth: simple token is not cryptographically signed
	2023-12-05 20:56:59.543224 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-05 20:56:59.545066 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-05 20:56:59.545234 I | embed: listening for metrics on http://192.168.50.116:2381
	2023-12-05 20:56:59.545498 I | etcdserver: 70e810c2542c58a7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-05 20:56:59.546168 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-05 20:56:59.546402 I | etcdserver/membership: added member 70e810c2542c58a7 [https://192.168.50.116:2380] to cluster 938c7bbb9c530c74
	2023-12-05 20:56:59.927823 I | raft: 70e810c2542c58a7 is starting a new election at term 1
	2023-12-05 20:56:59.927981 I | raft: 70e810c2542c58a7 became candidate at term 2
	2023-12-05 20:56:59.928176 I | raft: 70e810c2542c58a7 received MsgVoteResp from 70e810c2542c58a7 at term 2
	2023-12-05 20:56:59.928309 I | raft: 70e810c2542c58a7 became leader at term 2
	2023-12-05 20:56:59.928458 I | raft: raft.node: 70e810c2542c58a7 elected leader 70e810c2542c58a7 at term 2
	2023-12-05 20:56:59.928840 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-05 20:56:59.930841 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-05 20:56:59.930926 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-05 20:56:59.930954 I | etcdserver: published {Name:old-k8s-version-061206 ClientURLs:[https://192.168.50.116:2379]} to cluster 938c7bbb9c530c74
	2023-12-05 20:56:59.930999 I | embed: ready to serve client requests
	2023-12-05 20:56:59.931207 I | embed: ready to serve client requests
	2023-12-05 20:56:59.932986 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-05 20:56:59.935124 I | embed: serving client requests on 192.168.50.116:2379
	2023-12-05 20:57:25.583611 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-j68qr\" " with result "range_response_count:1 size:1746" took too long (140.247659ms) to execute
	2023-12-05 20:57:25.769052 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:3 size:4840" took too long (100.291551ms) to execute
	2023-12-05 21:07:00.979045 I | mvcc: store.index: compact 670
	2023-12-05 21:07:00.982051 I | mvcc: finished scheduled compaction at 670 (took 2.217632ms)
	
	* 
	* ==> kernel <==
	*  21:07:35 up 16 min,  0 users,  load average: 0.17, 0.24, 0.27
	Linux old-k8s-version-061206 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac] <==
	* I1205 21:00:29.074468       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:00:29.075194       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:00:29.075378       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:00:29.075430       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:02:05.248897       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:02:05.249293       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:02:05.249473       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:02:05.249522       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:03:05.250038       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:03:05.250143       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:03:05.250182       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:03:05.250194       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:05:05.250701       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:05:05.251184       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:05:05.251340       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:05:05.251391       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:07:05.252411       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:07:05.252839       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:07:05.252922       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:07:05.252945       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a] <==
	* E1205 21:01:27.242594       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:01:41.329515       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:01:57.494336       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:02:13.332698       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:02:27.746424       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:02:45.335640       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:02:57.998950       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:03:17.337879       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:03:28.251190       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:03:49.340093       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:03:58.503427       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:04:21.342357       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:04:28.755774       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:04:53.344733       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:04:59.007857       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:05:25.347133       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:05:29.259901       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:05:57.350247       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:05:59.513316       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:06:29.352713       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:06:29.765959       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1205 21:07:00.019191       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:07:01.355111       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:07:30.271239       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:07:33.357959       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c] <==
	* W1205 20:57:28.214900       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1205 20:57:28.228785       1 node.go:135] Successfully retrieved node IP: 192.168.50.116
	I1205 20:57:28.228876       1 server_others.go:149] Using iptables Proxier.
	I1205 20:57:28.231870       1 server.go:529] Version: v1.16.0
	I1205 20:57:28.234399       1 config.go:313] Starting service config controller
	I1205 20:57:28.234460       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1205 20:57:28.234504       1 config.go:131] Starting endpoints config controller
	I1205 20:57:28.234611       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1205 20:57:28.338021       1 shared_informer.go:204] Caches are synced for service config 
	I1205 20:57:28.339066       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117] <==
	* W1205 20:57:04.259050       1 authentication.go:79] Authentication is disabled
	I1205 20:57:04.259073       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1205 20:57:04.259418       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1205 20:57:04.298130       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:04.302985       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:57:04.303262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:04.303365       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:57:04.304204       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:57:04.304329       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:04.309111       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:04.309328       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:04.309406       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:04.309466       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:57:04.312787       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:57:05.300066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:05.306058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:57:05.311327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:05.314448       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:57:05.316047       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:57:05.318331       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:05.320953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:05.323778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:05.323932       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:05.326249       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:57:05.326367       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:14 UTC, ends at Tue 2023-12-05 21:07:35 UTC. --
	Dec 05 21:03:09 old-k8s-version-061206 kubelet[3199]: E1205 21:03:09.668374    3199 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:03:09 old-k8s-version-061206 kubelet[3199]: E1205 21:03:09.668479    3199 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:03:09 old-k8s-version-061206 kubelet[3199]: E1205 21:03:09.668595    3199 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:03:09 old-k8s-version-061206 kubelet[3199]: E1205 21:03:09.668628    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 05 21:03:24 old-k8s-version-061206 kubelet[3199]: E1205 21:03:24.661233    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:03:35 old-k8s-version-061206 kubelet[3199]: E1205 21:03:35.658715    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:03:49 old-k8s-version-061206 kubelet[3199]: E1205 21:03:49.658056    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:04:02 old-k8s-version-061206 kubelet[3199]: E1205 21:04:02.657869    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:04:17 old-k8s-version-061206 kubelet[3199]: E1205 21:04:17.658075    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:04:31 old-k8s-version-061206 kubelet[3199]: E1205 21:04:31.657907    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:04:44 old-k8s-version-061206 kubelet[3199]: E1205 21:04:44.658949    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:04:55 old-k8s-version-061206 kubelet[3199]: E1205 21:04:55.657602    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:05:07 old-k8s-version-061206 kubelet[3199]: E1205 21:05:07.657902    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:05:18 old-k8s-version-061206 kubelet[3199]: E1205 21:05:18.658092    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:05:31 old-k8s-version-061206 kubelet[3199]: E1205 21:05:31.658586    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:05:42 old-k8s-version-061206 kubelet[3199]: E1205 21:05:42.658115    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:05:53 old-k8s-version-061206 kubelet[3199]: E1205 21:05:53.657931    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:08 old-k8s-version-061206 kubelet[3199]: E1205 21:06:08.658143    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:22 old-k8s-version-061206 kubelet[3199]: E1205 21:06:22.658089    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:33 old-k8s-version-061206 kubelet[3199]: E1205 21:06:33.657730    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:48 old-k8s-version-061206 kubelet[3199]: E1205 21:06:48.658943    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:56 old-k8s-version-061206 kubelet[3199]: E1205 21:06:56.758369    3199 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 05 21:07:02 old-k8s-version-061206 kubelet[3199]: E1205 21:07:02.658356    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:16 old-k8s-version-061206 kubelet[3199]: E1205 21:07:16.658753    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:27 old-k8s-version-061206 kubelet[3199]: E1205 21:07:27.658429    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61] <==
	* I1205 20:57:28.741057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:28.755001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:28.756103       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:28.765065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:28.766429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948!
	I1205 20:57:28.766704       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1d7830a-c358-4e1a-91a1-982b7108f3e1", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948 became leader
	I1205 20:57:28.870168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-061206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-jbxkl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl: exit status 1 (74.728533ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-jbxkl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:15:00.314948623 +0000 UTC m=+6013.127408256
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-463614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (82.737831ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-463614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-463614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-463614 logs -n 25: (1.468784491s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /etc/kubernetes/kubelet.conf                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /var/lib/kubelet/config.yaml                           |                       |         |         |                     |                     |
	| start   | -p calico-855101 --memory=3072                         | calico-855101         | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                       |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                       |         |         |                     |                     |
	|         | --container-runtime=crio                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | systemctl status docker --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                      |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | systemctl cat docker                                   |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /etc/docker/daemon.json                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo docker                          | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | system info                                            |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | systemctl status cri-docker                            |                       |         |         |                     |                     |
	|         | --all --full --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | systemctl cat cri-docker                               |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | cri-dockerd --version                                  |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | systemctl status containerd                            |                       |         |         |                     |                     |
	|         | --all --full --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | systemctl cat containerd                               |                       |         |         |                     |                     |
	|         | --no-pager                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo cat                             | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /etc/containerd/config.toml                            |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | containerd config dump                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | systemctl status crio --all                            |                       |         |         |                     |                     |
	|         | --full --no-pager                                      |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo                                 | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | systemctl cat crio --no-pager                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo find                            | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-855101 sudo crio                            | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	|         | config                                                 |                       |         |         |                     |                     |
	| delete  | -p kindnet-855101                                      | kindnet-855101        | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC | 05 Dec 23 21:13 UTC |
	| start   | -p custom-flannel-855101                               | custom-flannel-855101 | jenkins | v1.32.0 | 05 Dec 23 21:13 UTC |                     |
	|         | --memory=3072 --alsologtostderr                        |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                         |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                       |                       |         |         |                     |                     |
	|         | --driver=kvm2                                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio                               |                       |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-051721                  | newest-cni-051721     | jenkins | v1.32.0 | 05 Dec 23 21:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                       |         |         |                     |                     |
	| start   | -p newest-cni-051721 --memory=2200 --alsologtostderr   | newest-cni-051721     | jenkins | v1.32.0 | 05 Dec 23 21:14 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                       |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                       |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                       |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                       |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                       |         |         |                     |                     |
	|---------|--------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 21:14:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:14:42.844986   56787 out.go:296] Setting OutFile to fd 1 ...
	I1205 21:14:42.845217   56787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:14:42.845231   56787 out.go:309] Setting ErrFile to fd 2...
	I1205 21:14:42.845238   56787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:14:42.845560   56787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 21:14:42.846355   56787 out.go:303] Setting JSON to false
	I1205 21:14:42.847751   56787 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7036,"bootTime":1701803847,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:14:42.847839   56787 start.go:138] virtualization: kvm guest
	I1205 21:14:42.850454   56787 out.go:177] * [newest-cni-051721] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:14:42.852003   56787 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 21:14:42.852001   56787 notify.go:220] Checking for updates...
	I1205 21:14:42.853719   56787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:14:42.855375   56787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 21:14:42.857008   56787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 21:14:42.858681   56787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:14:42.860249   56787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:14:40.706911   56344 provision.go:172] copyRemoteCerts
	I1205 21:14:40.706970   56344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:14:40.706993   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:40.709519   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:40.709804   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:40.709843   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:40.709999   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:40.710230   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:40.710390   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:40.710511   56344 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/custom-flannel-855101/id_rsa Username:docker}
	I1205 21:14:40.798293   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 21:14:40.826662   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1205 21:14:40.852124   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:14:40.877969   56344 provision.go:86] duration metric: configureAuth took 298.161611ms
	I1205 21:14:40.878000   56344 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:14:40.878230   56344 config.go:182] Loaded profile config "custom-flannel-855101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 21:14:40.878344   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:40.880993   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:40.881366   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:40.881406   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:40.881609   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:40.881802   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:40.881959   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:40.882099   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:40.882261   56344 main.go:141] libmachine: Using SSH client type: native
	I1205 21:14:40.882718   56344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1205 21:14:40.882739   56344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:14:41.201697   56344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
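	Note on the %!s(MISSING) fragments seen in the logged SSH commands (here, and later in the date +%!s(MISSING).%!N(MISSING) clock check and in the generated kubelet config): the command templates evidently contain literal printf-style verbs, and the log formatter renders them with no arguments, producing %!s(MISSING) and similar. The command that actually ran here is the intended printf %s pipeline, which the echoed CRIO_MINIKUBE_OPTIONS output above confirms:
	
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio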
	
	I1205 21:14:41.201727   56344 main.go:141] libmachine: Checking connection to Docker...
	I1205 21:14:41.201737   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetURL
	I1205 21:14:41.203156   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | Using libvirt version 6000000
	I1205 21:14:41.205543   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.205951   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.205984   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.206169   56344 main.go:141] libmachine: Docker is up and running!
	I1205 21:14:41.206246   56344 main.go:141] libmachine: Reticulating splines...
	I1205 21:14:41.206291   56344 client.go:171] LocalClient.Create took 28.334535183s
	I1205 21:14:41.206325   56344 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-855101" took 28.33462071s
	I1205 21:14:41.206339   56344 start.go:300] post-start starting for "custom-flannel-855101" (driver="kvm2")
	I1205 21:14:41.206357   56344 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:14:41.206400   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .DriverName
	I1205 21:14:41.206668   56344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:14:41.206694   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:41.208794   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.209166   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.209200   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.209407   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:41.209598   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:41.209791   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:41.209922   56344 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/custom-flannel-855101/id_rsa Username:docker}
	I1205 21:14:41.300403   56344 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:14:41.305257   56344 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 21:14:41.305284   56344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 21:14:41.305342   56344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 21:14:41.305436   56344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 21:14:41.305546   56344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:14:41.318246   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 21:14:41.343415   56344 start.go:303] post-start completed in 137.0567ms
	I1205 21:14:41.343458   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetConfigRaw
	I1205 21:14:41.344198   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetIP
	I1205 21:14:41.347297   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.347738   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.347769   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.348043   56344 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/config.json ...
	I1205 21:14:41.348388   56344 start.go:128] duration metric: createHost completed in 28.497060568s
	I1205 21:14:41.348414   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:41.350861   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.351229   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.351256   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.351381   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:41.351568   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:41.351720   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:41.351875   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:41.352042   56344 main.go:141] libmachine: Using SSH client type: native
	I1205 21:14:41.352391   56344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.65 22 <nil> <nil>}
	I1205 21:14:41.352405   56344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 21:14:41.467503   56344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701810881.452318128
	
	I1205 21:14:41.467526   56344 fix.go:206] guest clock: 1701810881.452318128
	I1205 21:14:41.467536   56344 fix.go:219] Guest: 2023-12-05 21:14:41.452318128 +0000 UTC Remote: 2023-12-05 21:14:41.348402128 +0000 UTC m=+50.726999295 (delta=103.916ms)
	I1205 21:14:41.467559   56344 fix.go:190] guest clock delta is within tolerance: 103.916ms
	I1205 21:14:41.467569   56344 start.go:83] releasing machines lock for "custom-flannel-855101", held for 28.616407438s
	I1205 21:14:41.467595   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .DriverName
	I1205 21:14:41.467904   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetIP
	I1205 21:14:41.470639   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.471015   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.471045   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.471218   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .DriverName
	I1205 21:14:41.471701   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .DriverName
	I1205 21:14:41.471850   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .DriverName
	I1205 21:14:41.471917   56344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:14:41.471955   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:41.472051   56344 ssh_runner.go:195] Run: cat /version.json
	I1205 21:14:41.472071   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHHostname
	I1205 21:14:41.474865   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.475018   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.475263   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.475293   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.475320   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:41.475335   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:41.475442   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:41.475602   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHPort
	I1205 21:14:41.475618   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:41.475871   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHKeyPath
	I1205 21:14:41.475887   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:41.476027   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetSSHUsername
	I1205 21:14:41.476022   56344 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/custom-flannel-855101/id_rsa Username:docker}
	I1205 21:14:41.476165   56344 sshutil.go:53] new ssh client: &{IP:192.168.72.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/custom-flannel-855101/id_rsa Username:docker}
	I1205 21:14:41.564022   56344 ssh_runner.go:195] Run: systemctl --version
	I1205 21:14:41.587161   56344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:14:41.747060   56344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:14:41.753268   56344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:14:41.753349   56344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:14:41.769007   56344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:14:41.769032   56344 start.go:475] detecting cgroup driver to use...
	I1205 21:14:41.769101   56344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:14:41.789272   56344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:14:41.801788   56344 docker.go:203] disabling cri-docker service (if available) ...
	I1205 21:14:41.801844   56344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:14:41.814740   56344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:14:41.828137   56344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:14:41.933354   56344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:14:42.058176   56344 docker.go:219] disabling docker service ...
	I1205 21:14:42.058249   56344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:14:42.071312   56344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:14:42.083053   56344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:14:42.184896   56344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:14:42.284539   56344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:14:42.297139   56344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:14:42.315935   56344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 21:14:42.315990   56344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:14:42.326765   56344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:14:42.326837   56344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:14:42.336534   56344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:14:42.345982   56344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:14:42.358516   56344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:14:42.369495   56344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:14:42.379644   56344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:14:42.379701   56344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:14:42.394539   56344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:14:42.405059   56344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:14:42.506505   56344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:14:42.700026   56344 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:14:42.700093   56344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:14:42.709840   56344 start.go:543] Will wait 60s for crictl version
	I1205 21:14:42.709921   56344 ssh_runner.go:195] Run: which crictl
	I1205 21:14:42.713939   56344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:14:42.749606   56344 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 21:14:42.749696   56344 ssh_runner.go:195] Run: crio --version
	I1205 21:14:42.805522   56344 ssh_runner.go:195] Run: crio --version
	I1205 21:14:42.867506   56344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
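	Condensed, the container-runtime preparation in the preceding lines performs roughly these guest-side steps (values copied from the log; a sketch rather than the exact ssh_runner invocations, which also stop and mask the docker and cri-docker units and disable the podman bridge CNI config first):
	
	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	    # pause image, cgroup driver and conmon cgroup for CRI-O
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # the bridge-nf-call-iptables sysctl was missing, so load the module, enable forwarding, restart
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	
	After the restart the run waits for /var/run/crio/crio.sock and checks the crictl and crio versions, which report cri-o 1.24.1 as shown above.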
	I1205 21:14:42.862257   56787 config.go:182] Loaded profile config "newest-cni-051721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 21:14:42.862877   56787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:42.862934   56787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:42.880830   56787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I1205 21:14:42.881250   56787 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:42.881831   56787 main.go:141] libmachine: Using API Version  1
	I1205 21:14:42.881855   56787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:42.882211   56787 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:42.882421   56787 main.go:141] libmachine: (newest-cni-051721) Calling .DriverName
	I1205 21:14:42.882660   56787 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 21:14:42.882942   56787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:42.882982   56787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:42.901317   56787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1205 21:14:42.901754   56787 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:42.902314   56787 main.go:141] libmachine: Using API Version  1
	I1205 21:14:42.902344   56787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:42.902725   56787 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:42.902992   56787 main.go:141] libmachine: (newest-cni-051721) Calling .DriverName
	I1205 21:14:42.946629   56787 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:14:42.948290   56787 start.go:298] selected driver: kvm2
	I1205 21:14:42.948309   56787 start.go:902] validating driver "kvm2" against &{Name:newest-cni-051721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-051721 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.252 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 21:14:42.948448   56787 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:14:42.949245   56787 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:14:42.949326   56787 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:14:42.964055   56787 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 21:14:42.964458   56787 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 21:14:42.964528   56787 cni.go:84] Creating CNI manager for ""
	I1205 21:14:42.964547   56787 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:14:42.964563   56787 start_flags.go:323] config:
	{Name:newest-cni-051721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-051721 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.252 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 21:14:42.964740   56787 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:14:42.966581   56787 out.go:177] * Starting control plane node newest-cni-051721 in cluster newest-cni-051721
	I1205 21:14:42.967989   56787 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 21:14:42.968036   56787 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1205 21:14:42.968050   56787 cache.go:56] Caching tarball of preloaded images
	I1205 21:14:42.968128   56787 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:14:42.968139   56787 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1205 21:14:42.968243   56787 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/newest-cni-051721/config.json ...
	I1205 21:14:42.968449   56787 start.go:365] acquiring machines lock for newest-cni-051721: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:14:42.968499   56787 start.go:369] acquired machines lock for "newest-cni-051721" in 29.925µs
	I1205 21:14:42.968519   56787 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:14:42.968528   56787 fix.go:54] fixHost starting: 
	I1205 21:14:42.968787   56787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:42.968832   56787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:42.984845   56787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
	I1205 21:14:42.985393   56787 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:42.986113   56787 main.go:141] libmachine: Using API Version  1
	I1205 21:14:42.986136   56787 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:42.986542   56787 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:42.987480   56787 main.go:141] libmachine: (newest-cni-051721) Calling .DriverName
	I1205 21:14:42.987903   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetState
	I1205 21:14:42.989826   56787 fix.go:102] recreateIfNeeded on newest-cni-051721: state=Running err=<nil>
	W1205 21:14:42.989849   56787 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 21:14:42.991761   56787 out.go:177] * Updating the running kvm2 "newest-cni-051721" VM ...
	I1205 21:14:40.263452   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:40.763015   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:41.263425   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:41.763739   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:42.263503   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:42.762994   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:43.263021   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:43.763981   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:44.263001   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:44.764075   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:42.868964   56344 main.go:141] libmachine: (custom-flannel-855101) Calling .GetIP
	I1205 21:14:42.872261   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:42.872688   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:4b:7c", ip: ""} in network mk-custom-flannel-855101: {Iface:virbr4 ExpiryTime:2023-12-05 22:14:30 +0000 UTC Type:0 Mac:52:54:00:f9:4b:7c Iaid: IPaddr:192.168.72.65 Prefix:24 Hostname:custom-flannel-855101 Clientid:01:52:54:00:f9:4b:7c}
	I1205 21:14:42.872715   56344 main.go:141] libmachine: (custom-flannel-855101) DBG | domain custom-flannel-855101 has defined IP address 192.168.72.65 and MAC address 52:54:00:f9:4b:7c in network mk-custom-flannel-855101
	I1205 21:14:42.872957   56344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:14:42.878068   56344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
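	The grep/cp one-liner above rewrites the guest's /etc/hosts so that host.minikube.internal resolves to the host side of the mk-custom-flannel-855101 network; after it runs the file carries the added entry:
	
	    192.168.72.1	host.minikube.internal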
	I1205 21:14:42.890455   56344 localpath.go:92] copying /home/jenkins/minikube-integration/17731-6237/.minikube/client.crt -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt
	I1205 21:14:42.890605   56344 localpath.go:117] copying /home/jenkins/minikube-integration/17731-6237/.minikube/client.key -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.key
	I1205 21:14:42.890724   56344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 21:14:42.890777   56344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:14:42.935412   56344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 21:14:42.935484   56344 ssh_runner.go:195] Run: which lz4
	I1205 21:14:42.939532   56344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 21:14:42.944168   56344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:14:42.944198   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 21:14:44.883697   56344 crio.go:444] Took 1.944203 seconds to copy over tarball
	I1205 21:14:44.883793   56344 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:14:45.263368   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:45.763851   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:46.263062   55608 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:14:46.530699   55608 kubeadm.go:1088] duration metric: took 10.618453422s to wait for elevateKubeSystemPrivileges.
	I1205 21:14:46.530751   55608 kubeadm.go:406] StartCluster complete in 25.714422648s
	I1205 21:14:46.530774   55608 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:46.530848   55608 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 21:14:46.531996   55608 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:46.533641   55608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 21:14:46.533886   55608 config.go:182] Loaded profile config "calico-855101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 21:14:46.533935   55608 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 21:14:46.533986   55608 addons.go:69] Setting storage-provisioner=true in profile "calico-855101"
	I1205 21:14:46.534002   55608 addons.go:231] Setting addon storage-provisioner=true in "calico-855101"
	I1205 21:14:46.534051   55608 host.go:66] Checking if "calico-855101" exists ...
	I1205 21:14:46.534316   55608 addons.go:69] Setting default-storageclass=true in profile "calico-855101"
	I1205 21:14:46.534338   55608 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-855101"
	I1205 21:14:46.534480   55608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:46.534532   55608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:46.534781   55608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:46.534822   55608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:46.555131   55608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I1205 21:14:46.555177   55608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I1205 21:14:46.555662   55608 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:46.555719   55608 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:46.556153   55608 main.go:141] libmachine: Using API Version  1
	I1205 21:14:46.556175   55608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:46.556570   55608 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:46.556874   55608 main.go:141] libmachine: Using API Version  1
	I1205 21:14:46.556891   55608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:46.557330   55608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:46.557349   55608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:46.557683   55608 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:46.557858   55608 main.go:141] libmachine: (calico-855101) Calling .GetState
	I1205 21:14:46.561896   55608 addons.go:231] Setting addon default-storageclass=true in "calico-855101"
	I1205 21:14:46.561943   55608 host.go:66] Checking if "calico-855101" exists ...
	I1205 21:14:46.562367   55608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:46.562400   55608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:46.578507   55608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I1205 21:14:46.578921   55608 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:46.579452   55608 main.go:141] libmachine: Using API Version  1
	I1205 21:14:46.579471   55608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:46.579945   55608 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:46.580262   55608 main.go:141] libmachine: (calico-855101) Calling .GetState
	I1205 21:14:46.584809   55608 main.go:141] libmachine: (calico-855101) Calling .DriverName
	I1205 21:14:46.584890   55608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I1205 21:14:46.586857   55608 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:14:46.585623   55608 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:46.590562   55608 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:14:46.590576   55608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:14:46.590598   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHHostname
	I1205 21:14:46.591304   55608 main.go:141] libmachine: Using API Version  1
	I1205 21:14:46.591324   55608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:46.591848   55608 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:46.592574   55608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:14:46.592600   55608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:14:46.594955   55608 main.go:141] libmachine: (calico-855101) DBG | domain calico-855101 has defined MAC address 52:54:00:7d:43:c4 in network mk-calico-855101
	I1205 21:14:46.595243   55608 main.go:141] libmachine: (calico-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:43:c4", ip: ""} in network mk-calico-855101: {Iface:virbr3 ExpiryTime:2023-12-05 22:14:01 +0000 UTC Type:0 Mac:52:54:00:7d:43:c4 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:calico-855101 Clientid:01:52:54:00:7d:43:c4}
	I1205 21:14:46.595264   55608 main.go:141] libmachine: (calico-855101) DBG | domain calico-855101 has defined IP address 192.168.61.48 and MAC address 52:54:00:7d:43:c4 in network mk-calico-855101
	I1205 21:14:46.595503   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHPort
	I1205 21:14:46.597331   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHKeyPath
	I1205 21:14:46.597529   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHUsername
	I1205 21:14:46.597995   55608 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/calico-855101/id_rsa Username:docker}
	I1205 21:14:46.616329   55608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I1205 21:14:46.616880   55608 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:14:46.617380   55608 main.go:141] libmachine: Using API Version  1
	I1205 21:14:46.617398   55608 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:14:46.617845   55608 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:14:46.618128   55608 main.go:141] libmachine: (calico-855101) Calling .GetState
	I1205 21:14:46.620137   55608 main.go:141] libmachine: (calico-855101) Calling .DriverName
	I1205 21:14:46.620378   55608 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:14:46.620388   55608 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:14:46.620399   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHHostname
	I1205 21:14:46.624391   55608 main.go:141] libmachine: (calico-855101) DBG | domain calico-855101 has defined MAC address 52:54:00:7d:43:c4 in network mk-calico-855101
	I1205 21:14:46.624880   55608 main.go:141] libmachine: (calico-855101) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:43:c4", ip: ""} in network mk-calico-855101: {Iface:virbr3 ExpiryTime:2023-12-05 22:14:01 +0000 UTC Type:0 Mac:52:54:00:7d:43:c4 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:calico-855101 Clientid:01:52:54:00:7d:43:c4}
	I1205 21:14:46.624899   55608 main.go:141] libmachine: (calico-855101) DBG | domain calico-855101 has defined IP address 192.168.61.48 and MAC address 52:54:00:7d:43:c4 in network mk-calico-855101
	I1205 21:14:46.625201   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHPort
	I1205 21:14:46.625353   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHKeyPath
	I1205 21:14:46.625466   55608 main.go:141] libmachine: (calico-855101) Calling .GetSSHUsername
	I1205 21:14:46.625555   55608 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/calico-855101/id_rsa Username:docker}
	I1205 21:14:46.762019   55608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:14:46.794664   55608 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:14:46.803071   55608 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
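	The kubectl pipeline above patches the coredns ConfigMap so in-cluster DNS also resolves host.minikube.internal: the sed expressions insert a log directive ahead of errors and the following hosts stanza ahead of the forward plugin (values taken from the command):
	
	    hosts {
	       192.168.61.1 host.minikube.internal
	       fallthrough
	    }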
	I1205 21:14:46.823382   55608 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-855101" context rescaled to 1 replicas
	I1205 21:14:46.823420   55608 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:14:46.826260   55608 out.go:177] * Verifying Kubernetes components...
	I1205 21:14:42.993163   56787 machine.go:88] provisioning docker machine ...
	I1205 21:14:42.993196   56787 main.go:141] libmachine: (newest-cni-051721) Calling .DriverName
	I1205 21:14:42.993395   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetMachineName
	I1205 21:14:42.993572   56787 buildroot.go:166] provisioning hostname "newest-cni-051721"
	I1205 21:14:42.993596   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetMachineName
	I1205 21:14:42.993780   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetSSHHostname
	I1205 21:14:42.997014   56787 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:14:42.997534   56787 main.go:141] libmachine: (newest-cni-051721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:29:b0", ip: ""} in network mk-newest-cni-051721: {Iface:virbr2 ExpiryTime:2023-12-05 22:11:23 +0000 UTC Type:0 Mac:52:54:00:09:29:b0 Iaid: IPaddr:192.168.50.252 Prefix:24 Hostname:newest-cni-051721 Clientid:01:52:54:00:09:29:b0}
	I1205 21:14:42.997561   56787 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined IP address 192.168.50.252 and MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:14:42.997830   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetSSHPort
	I1205 21:14:42.997995   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetSSHKeyPath
	I1205 21:14:42.998145   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetSSHKeyPath
	I1205 21:14:42.998256   56787 main.go:141] libmachine: (newest-cni-051721) Calling .GetSSHUsername
	I1205 21:14:42.998470   56787 main.go:141] libmachine: Using SSH client type: native
	I1205 21:14:42.998863   56787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.252 22 <nil> <nil>}
	I1205 21:14:42.998883   56787 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-051721 && echo "newest-cni-051721" | sudo tee /etc/hostname
	I1205 21:14:45.842601   56787 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.252:22: connect: no route to host
	I1205 21:14:48.266488   56344 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.382664272s)
	I1205 21:14:48.266512   56344 crio.go:451] Took 3.382787 seconds to extract the tarball
	I1205 21:14:48.266521   56344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:14:48.310312   56344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:14:48.391938   56344 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 21:14:48.391965   56344 cache_images.go:84] Images are preloaded, skipping loading
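	Because the first crictl check found no preloaded images (kube-apiserver:v1.28.4 was missing), the cached preload tarball was streamed to the guest and unpacked; the guest-side effect is equivalent to:
	
	    # preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 (458073571 bytes) copied to /preloaded.tar.lz4
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4    # ~3.4s in this run
	    rm -f /preloaded.tar.lz4
	    sudo crictl images --output json                  # re-check: all images now preloaded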
	I1205 21:14:48.392023   56344 ssh_runner.go:195] Run: crio config
	I1205 21:14:48.467685   56344 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1205 21:14:48.467728   56344 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 21:14:48.467748   56344 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.65 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-855101 NodeName:custom-flannel-855101 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:14:48.467884   56344 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-855101"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:14:48.467967   56344 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=custom-flannel-855101 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-855101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:}
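	The rendered kubeadm config above pins the pod network to 10.244.0.0/16 and the service network to 10.96.0.0/12, which must not overlap. As a quick illustrative check (not part of minikube; written here only to show the relationship between the two CIDRs), Go's net/netip can verify this directly:

	// cidrcheck.go: hypothetical sketch checking that the podSubnet and serviceSubnet
	// from the config above do not overlap, using only the standard library.
	package main

	import (
		"fmt"
		"net/netip"
	)

	// overlaps reports whether two CIDR prefixes share any addresses; for prefixes
	// this is equivalent to one containing the other's base address.
	func overlaps(a, b netip.Prefix) bool {
		return a.Contains(b.Addr()) || b.Contains(a.Addr())
	}

	func main() {
		pods := netip.MustParsePrefix("10.244.0.0/16")
		services := netip.MustParsePrefix("10.96.0.0/12")
		fmt.Println("pod/service CIDRs overlap:", overlaps(pods, services)) // expected: false
	}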
	I1205 21:14:48.468010   56344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 21:14:48.477844   56344 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:14:48.477932   56344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:14:48.488950   56344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 21:14:48.507131   56344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:14:48.525247   56344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 21:14:48.542820   56344 ssh_runner.go:195] Run: grep 192.168.72.65	control-plane.minikube.internal$ /etc/hosts
	I1205 21:14:48.546611   56344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:14:48.558794   56344 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101 for IP: 192.168.72.65
	I1205 21:14:48.558824   56344 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:48.559009   56344 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 21:14:48.559072   56344 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 21:14:48.559173   56344 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.key
	I1205 21:14:48.559199   56344 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key.fa85241c
	I1205 21:14:48.559215   56344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt.fa85241c with IP's: [192.168.72.65 10.96.0.1 127.0.0.1 10.0.0.1]
	I1205 21:14:48.679253   56344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt.fa85241c ...
	I1205 21:14:48.679280   56344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt.fa85241c: {Name:mkf5752d93611f4f9acaec2b07417aa64ab68124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:48.679467   56344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key.fa85241c ...
	I1205 21:14:48.679491   56344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key.fa85241c: {Name:mk4aa46d5fd007cfb5731ddbb5ae95606474bbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:48.679581   56344 certs.go:337] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt.fa85241c -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt
	I1205 21:14:48.679654   56344 certs.go:341] copying /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key.fa85241c -> /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key
	I1205 21:14:48.679740   56344 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.key
	I1205 21:14:48.679757   56344 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.crt with IP's: []
	I1205 21:14:48.857282   56344 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.crt ...
	I1205 21:14:48.857311   56344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.crt: {Name:mk4129b809eb5fd575ddaa5eebf15a7888897309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:14:48.873944   56344 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.key ...
	I1205 21:14:48.873982   56344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.key: {Name:mk9853d12b3950cb1fa96968ac860f3ba2b960e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
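	The cert steps above generate an apiserver serving certificate signed by the shared minikubeCA, with the node IP plus the service and loopback IPs as SANs, and then a proxy-client pair signed by the aggregator CA. As a rough sketch of the same idea (assumed code, not minikube's crypto.go; the subject names, serials, and lifetimes below are illustrative), the standard library's crypto/x509 can produce a CA-signed certificate with IP SANs:

	// certs.go: hypothetical sketch of generating a CA and a CA-signed serving cert with IP SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA, standing in for the shared minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving certificate signed by the CA, with IP SANs like the apiserver cert above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.72.65"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		// Emit the serving cert in PEM form, mirroring the "Writing cert to ..." steps above.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}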
	I1205 21:14:48.874239   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 21:14:48.874305   56344 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 21:14:48.874317   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:14:48.874366   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 21:14:48.874401   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:14:48.874440   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 21:14:48.874492   56344 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 21:14:48.875328   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 21:14:48.899995   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:14:48.924201   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:14:48.949779   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:14:49.036623   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:14:49.061361   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:14:49.085363   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:14:49.108764   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:14:49.132460   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 21:14:49.156690   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 21:14:49.179574   56344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:14:49.201711   56344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:14:49.219950   56344 ssh_runner.go:195] Run: openssl version
	I1205 21:14:49.225846   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:14:49.237122   56344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:14:49.241797   56344 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:14:49.241860   56344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:14:49.247449   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:14:49.258725   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 21:14:49.270583   56344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 21:14:49.275523   56344 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 21:14:49.275578   56344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 21:14:49.281547   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 21:14:49.293999   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 21:14:49.308885   56344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 21:14:49.314220   56344 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 21:14:49.314324   56344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 21:14:49.322073   56344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:14:49.332691   56344 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 21:14:49.337167   56344 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1205 21:14:49.337219   56344 kubeadm.go:404] StartCluster: {Name:custom-flannel-855101 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:custom-flannel-855101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.65 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 21:14:49.337331   56344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:14:49.337384   56344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:14:49.381036   56344 cri.go:89] found id: ""
	I1205 21:14:49.381094   56344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:14:49.391275   56344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:14:49.400825   56344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:14:49.410432   56344 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:14:49.410487   56344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:14:49.469432   56344 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 21:14:49.469551   56344 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 21:14:49.637452   56344 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:14:49.637599   56344 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:14:49.637775   56344 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:14:49.945898   56344 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:14:46.827868   55608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:14:48.425729   55608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.663661722s)
	I1205 21:14:48.425796   55608 main.go:141] libmachine: Making call to close driver server
	I1205 21:14:48.425810   55608 main.go:141] libmachine: (calico-855101) Calling .Close
	I1205 21:14:48.426332   55608 main.go:141] libmachine: (calico-855101) DBG | Closing plugin on server side
	I1205 21:14:48.426398   55608 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:14:48.426423   55608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:14:48.426441   55608 main.go:141] libmachine: Making call to close driver server
	I1205 21:14:48.426455   55608 main.go:141] libmachine: (calico-855101) Calling .Close
	I1205 21:14:48.426680   55608 main.go:141] libmachine: (calico-855101) DBG | Closing plugin on server side
	I1205 21:14:48.426708   55608 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:14:48.426721   55608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:14:49.326694   55608 main.go:141] libmachine: Making call to close driver server
	I1205 21:14:49.326723   55608 main.go:141] libmachine: (calico-855101) Calling .Close
	I1205 21:14:49.327042   55608 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:14:49.327070   55608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:14:50.032212   56344 out.go:204]   - Generating certificates and keys ...
	I1205 21:14:50.032342   56344 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 21:14:50.032429   56344 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 21:14:50.169212   56344 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 21:14:50.417729   56344 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1205 21:14:50.521812   56344 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1205 21:14:50.601356   56344 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1205 21:14:50.683730   55608 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.880609246s)
	I1205 21:14:50.683766   55608 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.855874127s)
	I1205 21:14:50.683789   55608 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.889080376s)
	I1205 21:14:50.683841   55608 main.go:141] libmachine: Making call to close driver server
	I1205 21:14:50.683854   55608 main.go:141] libmachine: (calico-855101) Calling .Close
	I1205 21:14:50.683763   55608 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1205 21:14:50.684240   55608 main.go:141] libmachine: (calico-855101) DBG | Closing plugin on server side
	I1205 21:14:50.684251   55608 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:14:50.684264   55608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:14:50.684280   55608 main.go:141] libmachine: Making call to close driver server
	I1205 21:14:50.684310   55608 main.go:141] libmachine: (calico-855101) Calling .Close
	I1205 21:14:50.684553   55608 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:14:50.684567   55608 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:14:50.686261   55608 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 21:14:50.686710   56344 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1205 21:14:50.686910   56344 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-855101 localhost] and IPs [192.168.72.65 127.0.0.1 ::1]
	I1205 21:14:50.941789   56344 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1205 21:14:50.942018   56344 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-855101 localhost] and IPs [192.168.72.65 127.0.0.1 ::1]
	I1205 21:14:50.998960   56344 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 21:14:51.443257   56344 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 21:14:51.552725   56344 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1205 21:14:51.553137   56344 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:14:51.799940   56344 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:14:51.886446   56344 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:14:52.045809   56344 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:14:52.168144   56344 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:14:52.168949   56344 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:14:52.173548   56344 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:14:48.918484   56787 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.252:22: connect: no route to host
	I1205 21:14:50.685122   55608 node_ready.go:35] waiting up to 15m0s for node "calico-855101" to be "Ready" ...
	I1205 21:14:50.688446   55608 addons.go:502] enable addons completed in 4.154506361s: enabled=[default-storageclass storage-provisioner]
	I1205 21:14:52.700343   55608 node_ready.go:58] node "calico-855101" has status "Ready":"False"
	I1205 21:14:54.700563   55608 node_ready.go:58] node "calico-855101" has status "Ready":"False"
	I1205 21:14:52.176379   56344 out.go:204]   - Booting up control plane ...
	I1205 21:14:52.176537   56344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:14:52.176641   56344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:14:52.176741   56344 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:14:52.191704   56344 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:14:52.192640   56344 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:14:52.192725   56344 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 21:14:52.330320   56344 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
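	Once the static pod manifests are handed to the kubelet, the wait-control-plane phase above amounts to polling the apiserver until it reports healthy. A minimal sketch of such a poll against the advertise address from the config (assumed code, not kubeadm's; the endpoint, deadline, and skipped TLS verification below are illustrative only):

	// healthz.go: hypothetical sketch of polling an apiserver /healthz endpoint until it answers 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// The apiserver's serving cert is not in this host's trust store,
			// so verification is skipped for this illustration only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.72.65:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}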
	I1205 21:14:54.994505   56787 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.252:22: connect: no route to host
	I1205 21:14:57.201074   55608 node_ready.go:58] node "calico-855101" has status "Ready":"False"
	I1205 21:14:57.717092   55608 node_ready.go:49] node "calico-855101" has status "Ready":"True"
	I1205 21:14:57.717124   55608 node_ready.go:38] duration metric: took 7.028690457s waiting for node "calico-855101" to be "Ready" ...
	I1205 21:14:57.717136   55608 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:14:57.729334   55608 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-7c968b5878-kl9bm" in "kube-system" namespace to be "Ready" ...
	I1205 21:14:59.774103   55608 pod_ready.go:102] pod "calico-kube-controllers-7c968b5878-kl9bm" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:54 UTC, ends at Tue 2023-12-05 21:15:01 UTC. --
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.226144419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810901226129563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3e44ada8-009e-45f1-9b81-0125e8eae956 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.226765697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c6daa728-dd19-41fc-ac29-0e0fd2b22729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.226838101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c6daa728-dd19-41fc-ac29-0e0fd2b22729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.227070684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c6daa728-dd19-41fc-ac29-0e0fd2b22729 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.277032558Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e4671521-e4f4-411c-ae60-37020d296181 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.277150662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e4671521-e4f4-411c-ae60-37020d296181 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.279744216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=55dd13a5-14e8-48c2-b32a-ee0655861f8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.280572648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810901280548124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=55dd13a5-14e8-48c2-b32a-ee0655861f8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.281773167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd8c28e8-4cc6-44a2-99ba-eca0ce538189 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.281874313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd8c28e8-4cc6-44a2-99ba-eca0ce538189 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.282152657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd8c28e8-4cc6-44a2-99ba-eca0ce538189 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.338134189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=74920b94-562e-432c-9b17-0198900fd402 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.338397584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=74920b94-562e-432c-9b17-0198900fd402 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.343649929Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8f876b5b-6301-4ac6-821b-e4df24d06e01 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.344292807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810901344170395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8f876b5b-6301-4ac6-821b-e4df24d06e01 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.345185609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9871fa72-bf6d-410c-abdf-238c490cccdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.345352524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9871fa72-bf6d-410c-abdf-238c490cccdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.345634801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9871fa72-bf6d-410c-abdf-238c490cccdd name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.398477184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6703c5b4-fb5b-4198-b78c-3cbefef22942 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.398566894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6703c5b4-fb5b-4198-b78c-3cbefef22942 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.400371912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fc55d4d5-029a-4429-8dea-b18054be5494 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.400873436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810901400854252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fc55d4d5-029a-4429-8dea-b18054be5494 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.401698116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8f131ec1-d637-4e96-a506-57ab624d1ae7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.401779176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f131ec1-d637-4e96-a506-57ab624d1ae7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:15:01 default-k8s-diff-port-463614 crio[721]: time="2023-12-05 21:15:01.401981209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809580941777360,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd9f43596f48437709bdf2bd4a901f53485dceb65c6c271cea4618d080762521,PodSandboxId:4b4aab3e6752716f2a257a33256bbc0e73a403130d3c01620232dd44cc9ec258,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701809558572686023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 583b1351-dfeb-4b29-ad50-7e4c204c9931,},Annotations:map[string]string{io.kubernetes.container.hash: 577813a4,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc,PodSandboxId:8cf1f37bacb0677b18ffd4f1402564cbcfa4739a47c54714609b9934d6db956f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809557038903419,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6pmzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69d0b16d-31bd-4db1-b165-ddbb870d5d48,},Annotations:map[string]string{io.kubernetes.container.hash: a35b0c16,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2,PodSandboxId:50a0a6b4bb2f305900b06d65914b7db475ffc11584ce1f9219b2bb9269490e3c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701809549684904727,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 8662a670-097a-47a4-8839-b65bd104c45a,},Annotations:map[string]string{io.kubernetes.container.hash: 2850abf9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d,PodSandboxId:001db274a0604344abc10ceddadab107b9483f63777692e1ca049df62f66ad75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809549637307096,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4zct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
9655fb8-d84f-4894-9fae-d606eb66ca04,},Annotations:map[string]string{io.kubernetes.container.hash: ddd25ed4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb,PodSandboxId:4834a14fc48a04865abcdf84e1478c9f2203b0ff44953595984dccf7e3a3dcc6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809543254384136,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: a08f0a8c4102b62c708135b3b2642710,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3,PodSandboxId:a68e05aae5679351778d5b8bf8084f53f81ec6c4104a1263b7f659bf6c0e9064,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809543303444102,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc536825cb78da3722788f2466c6919,},An
notations:map[string]string{io.kubernetes.container.hash: 78144c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa,PodSandboxId:9d5fe60297906143d53259ab9b376ca4f9e0301f4f7c21197fddbff0a7529c7b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809542875839578,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e73
90c4d36a6e2076133b2d84132461a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883,PodSandboxId:865c4d68c1b2f78ba7702929fe97b467db934810eafe25d5902671c072894708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809542767698848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-463614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7
8fae71f91846e31845c34f4e0fa4e,},Annotations:map[string]string{io.kubernetes.container.hash: ce19085a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f131ec1-d637-4e96-a506-57ab624d1ae7 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a816a407fb68       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   50a0a6b4bb2f3       storage-provisioner
	fd9f43596f484       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   4b4aab3e67527       busybox
	95dae582422a9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   8cf1f37bacb06       coredns-5dd5756b68-6pmzf
	6c766515e85b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   50a0a6b4bb2f3       storage-provisioner
	15eee84995781       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      22 minutes ago      Running             kube-proxy                1                   001db274a0604       kube-proxy-g4zct
	1eed3a831d6e9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   a68e05aae5679       etcd-default-k8s-diff-port-463614
	e019875171430       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      22 minutes ago      Running             kube-scheduler            1                   4834a14fc48a0       kube-scheduler-default-k8s-diff-port-463614
	fa3b51839f012       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      22 minutes ago      Running             kube-controller-manager   1                   9d5fe60297906       kube-controller-manager-default-k8s-diff-port-463614
	fad43ea2e090b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      22 minutes ago      Running             kube-apiserver            1                   865c4d68c1b2f       kube-apiserver-default-k8s-diff-port-463614
	
	* 
	* ==> coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59433 - 17018 "HINFO IN 1264421714362086919.719460568605505053. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009575149s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-463614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-463614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=default-k8s-diff-port-463614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_46_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:46:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-463614
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:14:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:13:23 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:13:23 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:13:23 +0000   Tue, 05 Dec 2023 20:46:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:13:23 +0000   Tue, 05 Dec 2023 20:52:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    default-k8s-diff-port-463614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd9c01fd3ee04a0dbbc7cf967abdc193
	  System UUID:                bd9c01fd-3ee0-4a0d-bbc7-cf967abdc193
	  Boot ID:                    e373c9bb-46f6-4c58-b07a-48ad227830a0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-6pmzf                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-463614                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-463614              250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-463614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-g4zct                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-463614              100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-676m6                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-463614 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-463614 event: Registered Node default-k8s-diff-port-463614 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-463614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-463614 event: Registered Node default-k8s-diff-port-463614 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.082144] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.778746] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.201374] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.633567] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 20:52] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.111320] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.149758] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.120003] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.269746] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[ +18.008163] systemd-fstab-generator[920]: Ignoring "noauto" for root device
	[ +15.077639] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 5 21:14] hrtimer: interrupt took 3349634 ns
	
	* 
	* ==> etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] <==
	* {"level":"info","ts":"2023-12-05T21:11:40.184415Z","caller":"traceutil/trace.go:171","msg":"trace[802258956] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:1489; }","duration":"106.245748ms","start":"2023-12-05T21:11:40.078137Z","end":"2023-12-05T21:11:40.184382Z","steps":["trace[802258956] 'count revisions from in-memory index tree'  (duration: 105.769425ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:12:05.108558Z","caller":"traceutil/trace.go:171","msg":"trace[1417962584] linearizableReadLoop","detail":"{readStateIndex:1778; appliedIndex:1777; }","duration":"123.674972ms","start":"2023-12-05T21:12:04.984851Z","end":"2023-12-05T21:12:05.108526Z","steps":["trace[1417962584] 'read index received'  (duration: 123.533576ms)","trace[1417962584] 'applied index is now lower than readState.Index'  (duration: 133.674µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T21:12:05.108749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.883675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T21:12:05.108783Z","caller":"traceutil/trace.go:171","msg":"trace[1384080149] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1509; }","duration":"123.948287ms","start":"2023-12-05T21:12:04.98482Z","end":"2023-12-05T21:12:05.108769Z","steps":["trace[1384080149] 'agreement among raft nodes before linearized reading'  (duration: 123.849275ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:12:05.583448Z","caller":"traceutil/trace.go:171","msg":"trace[1670457543] transaction","detail":"{read_only:false; response_revision:1510; number_of_response:1; }","duration":"155.952608ms","start":"2023-12-05T21:12:05.427464Z","end":"2023-12-05T21:12:05.583417Z","steps":["trace[1670457543] 'process raft request'  (duration: 155.455944ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:12:26.475394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1284}
	{"level":"info","ts":"2023-12-05T21:12:26.476914Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1284,"took":"1.137759ms","hash":2162453826}
	{"level":"info","ts":"2023-12-05T21:12:26.477105Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2162453826,"revision":1284,"compact-revision":1041}
	{"level":"warn","ts":"2023-12-05T21:12:38.103665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.963823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T21:12:38.103807Z","caller":"traceutil/trace.go:171","msg":"trace[1330754337] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1536; }","duration":"241.168675ms","start":"2023-12-05T21:12:37.862603Z","end":"2023-12-05T21:12:38.103772Z","steps":["trace[1330754337] 'count revisions from in-memory index tree'  (duration: 240.620038ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T21:12:38.103721Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.215712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T21:12:38.103956Z","caller":"traceutil/trace.go:171","msg":"trace[1579142358] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1536; }","duration":"118.442987ms","start":"2023-12-05T21:12:37.985478Z","end":"2023-12-05T21:12:38.103921Z","steps":["trace[1579142358] 'range keys from in-memory index tree'  (duration: 118.143946ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:13:26.137602Z","caller":"traceutil/trace.go:171","msg":"trace[1944109072] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"102.95801ms","start":"2023-12-05T21:13:26.034603Z","end":"2023-12-05T21:13:26.137561Z","steps":["trace[1944109072] 'process raft request'  (duration: 102.818448ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:14:20.088706Z","caller":"traceutil/trace.go:171","msg":"trace[452183908] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"281.36452ms","start":"2023-12-05T21:14:19.807297Z","end":"2023-12-05T21:14:20.088662Z","steps":["trace[452183908] 'process raft request'  (duration: 264.35255ms)","trace[452183908] 'compare'  (duration: 16.678272ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-05T21:14:20.088549Z","caller":"traceutil/trace.go:171","msg":"trace[2081331988] linearizableReadLoop","detail":"{readStateIndex:1919; appliedIndex:1918; }","duration":"222.397301ms","start":"2023-12-05T21:14:19.866096Z","end":"2023-12-05T21:14:20.088494Z","steps":["trace[2081331988] 'read index received'  (duration: 205.50611ms)","trace[2081331988] 'applied index is now lower than readState.Index'  (duration: 16.889673ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-05T21:14:20.089062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.83236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-12-05T21:14:20.089339Z","caller":"traceutil/trace.go:171","msg":"trace[1850458123] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1621; }","duration":"223.273372ms","start":"2023-12-05T21:14:19.866048Z","end":"2023-12-05T21:14:20.089321Z","steps":["trace[1850458123] 'agreement among raft nodes before linearized reading'  (duration: 222.726237ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T21:14:20.089641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.21349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T21:14:20.089716Z","caller":"traceutil/trace.go:171","msg":"trace[769721729] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1621; }","duration":"103.294974ms","start":"2023-12-05T21:14:19.98641Z","end":"2023-12-05T21:14:20.089705Z","steps":["trace[769721729] 'agreement among raft nodes before linearized reading'  (duration: 103.191075ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-05T21:14:20.789548Z","caller":"traceutil/trace.go:171","msg":"trace[1410198833] transaction","detail":"{read_only:false; response_revision:1622; number_of_response:1; }","duration":"302.004638ms","start":"2023-12-05T21:14:20.487525Z","end":"2023-12-05T21:14:20.78953Z","steps":["trace[1410198833] 'process raft request'  (duration: 301.822697ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T21:14:20.789862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-05T21:14:20.487508Z","time spent":"302.194206ms","remote":"127.0.0.1:43288","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":605,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1620 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:532 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-12-05T21:14:21.078181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.575291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-05T21:14:21.078364Z","caller":"traceutil/trace.go:171","msg":"trace[2055441229] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1622; }","duration":"152.753303ms","start":"2023-12-05T21:14:20.925581Z","end":"2023-12-05T21:14:21.078335Z","steps":["trace[2055441229] 'range keys from in-memory index tree'  (duration: 152.473064ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-05T21:14:49.904383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.458207ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9083070211743074987 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.27\" mod_revision:1637 > success:<request_put:<key:\"/registry/masterleases/192.168.39.27\" value_size:67 lease:9083070211743074985 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.27\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-05T21:14:49.904478Z","caller":"traceutil/trace.go:171","msg":"trace[1922624725] transaction","detail":"{read_only:false; response_revision:1645; number_of_response:1; }","duration":"205.790379ms","start":"2023-12-05T21:14:49.698676Z","end":"2023-12-05T21:14:49.904466Z","steps":["trace[1922624725] 'process raft request'  (duration: 69.180953ms)","trace[1922624725] 'compare'  (duration: 136.215825ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  21:15:01 up 23 min,  0 users,  load average: 1.04, 0.48, 0.25
	Linux default-k8s-diff-port-463614 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] <==
	* E1205 21:10:29.366780       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:10:29.368398       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:11:28.190618       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 21:12:28.191301       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:12:28.368768       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:12:28.368908       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:12:28.369427       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:12:29.369507       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:12:29.369783       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:12:29.369801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:12:29.369719       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:12:29.370009       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:12:29.370885       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:13:28.190866       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:13:29.370888       1 handler_proxy.go:93] no RequestInfo found in the context
	W1205 21:13:29.371020       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:13:29.371108       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:13:29.371138       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1205 21:13:29.371174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:13:29.372499       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:14:28.190675       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] <==
	* I1205 21:09:14.380515       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:09:43.728394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:09:44.389472       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:13.733840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:14.399178       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:43.741718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:44.408305       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:11:13.748348       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:11:14.418412       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:11:43.757107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:11:44.428952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:12:13.765376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:12:14.439774       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:12:43.770319       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:12:44.449881       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:13:13.775386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:13:14.459315       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:13:43.788509       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:13:44.468349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:13:49.735671       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="769.723µs"
	I1205 21:14:02.724858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="184.813µs"
	E1205 21:14:13.794635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:14:14.480916       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:14:43.803035       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:14:44.495706       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] <==
	* I1205 20:52:29.919352       1 server_others.go:69] "Using iptables proxy"
	I1205 20:52:29.936504       1 node.go:141] Successfully retrieved node IP: 192.168.39.27
	I1205 20:52:30.023040       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:52:30.023149       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:52:30.026783       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:52:30.026893       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:52:30.027411       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:52:30.027653       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:52:30.028836       1 config.go:188] "Starting service config controller"
	I1205 20:52:30.028907       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:52:30.028959       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:52:30.028984       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:52:30.031092       1 config.go:315] "Starting node config controller"
	I1205 20:52:30.031148       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:52:30.130271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:52:30.130712       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:52:30.132269       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] <==
	* I1205 20:52:26.228126       1 serving.go:348] Generated self-signed cert in-memory
	W1205 20:52:28.343829       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 20:52:28.343939       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:52:28.343981       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 20:52:28.344015       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 20:52:28.395018       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1205 20:52:28.395047       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:52:28.397561       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 20:52:28.397723       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 20:52:28.398730       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 20:52:28.399036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 20:52:28.498878       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:54 UTC, ends at Tue 2023-12-05 21:15:02 UTC. --
	Dec 05 21:12:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:12:31 default-k8s-diff-port-463614 kubelet[926]: E1205 21:12:31.711029     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:12:42 default-k8s-diff-port-463614 kubelet[926]: E1205 21:12:42.710720     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:12:57 default-k8s-diff-port-463614 kubelet[926]: E1205 21:12:57.710977     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:13:08 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:08.710612     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:13:21 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:21.715115     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:13:21 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:21.730173     926 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:13:21 default-k8s-diff-port-463614 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:13:21 default-k8s-diff-port-463614 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:13:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:13:34 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:34.730717     926 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:13:34 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:34.730788     926 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:13:34 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:34.731126     926 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h292q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-676m6_kube-system(dc304fd9-2922-42f7-b917-5618c6d43f8d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:13:34 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:34.731183     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:13:49 default-k8s-diff-port-463614 kubelet[926]: E1205 21:13:49.711705     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:14:02 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:02.710394     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:14:13 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:13.711984     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:14:21 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:21.730253     926 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:14:21 default-k8s-diff-port-463614 kubelet[926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:14:21 default-k8s-diff-port-463614 kubelet[926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:14:21 default-k8s-diff-port-463614 kubelet[926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:14:25 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:25.711413     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:14:36 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:36.711876     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:14:49 default-k8s-diff-port-463614 kubelet[926]: E1205 21:14:49.711666     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	Dec 05 21:15:01 default-k8s-diff-port-463614 kubelet[926]: E1205 21:15:01.712739     926 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-676m6" podUID="dc304fd9-2922-42f7-b917-5618c6d43f8d"
	
	* 
	* ==> storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] <==
	* I1205 20:53:01.059041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:53:01.076782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:53:01.076835       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:53:01.088963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:53:01.089280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895!
	I1205 20:53:01.090306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"283e2cb4-6883-45c7-9630-2d26c91f65d8", APIVersion:"v1", ResourceVersion:"562", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895 became leader
	I1205 20:53:01.189783       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-463614_09e621b0-c856-46e0-ad49-a5857e033895!
	
	* 
	* ==> storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] <==
	* I1205 20:52:29.906546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 20:52:59.910790       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-676m6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6: exit status 1 (89.867034ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-676m6" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-463614 describe pod metrics-server-57f55c9bc5-676m6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.73s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (289.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 21:06:40.012643   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143651 -n no-preload-143651
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:11:06.871436376 +0000 UTC m=+5779.683896010
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143651 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-143651 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.117µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-143651 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-143651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-143651 logs -n 25: (1.330979929s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 21:11 UTC | 05 Dec 23 21:11 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-46361
4 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/h
ome/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
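
The 46866 entries above document the image-cache path taken when no preload tarball matches the requested Kubernetes version: each required image is inspected in the container runtime, stale tags are removed with crictl, and the tarballs already copied to /var/lib/minikube/images are loaded with podman. A minimal sketch of that check-then-load step, with a hypothetical image name and tarball path and assuming sudo, podman and crictl are on PATH, might look like:

package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball into the CRI-O image store
// unless the runtime already knows the tag. The image/tarball values
// are illustrative; error handling is reduced to the essentials.
func ensureImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the tag is unknown,
	// which is how the log above decides an image "needs transfer".
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to load
	}
	// Remove any stale tag first (ignoring "not found"), then load
	// the tarball that was copied to the node.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Hypothetical values matching the shapes seen in the log.
	err := ensureImage("registry.k8s.io/kube-scheduler:v1.29.0-rc.1",
		"/var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1")
	fmt.Println("load result:", err)
}
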
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
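
The default-k8s-diff-port-463614 entries above show libmachine polling libvirt for a DHCP lease and retrying with a growing delay ("will retry after ...: waiting for machine to come up"). A simplified sketch of that retry loop, with a hypothetical probe function standing in for the lease lookup, could be:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls probe() until it returns an address or the deadline
// passes, sleeping a little longer between attempts each time, roughly
// mirroring the retry.go pattern in the log above.
func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	// Hypothetical probe: pretends the lease shows up on the fourth try.
	probe := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.162", nil
	}
	fmt.Println(waitForIP(probe, 30*time.Second))
}
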
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
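
Process 46700 then waits for the control plane by probing the apiserver's /healthz endpoint directly; the 403 and 500 responses further down are treated as "not ready yet" and the check is simply repeated. A minimal sketch of that readiness poll, assuming the endpoint address from the log and skipping certificate verification as an anonymous client would, is:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz URL until it answers 200 OK
// or the timeout expires. 403/500 bodies (as seen in the log) just mean
// the bootstrap hooks have not finished, so the check is repeated.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// anonymous probe against a self-signed CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	// Address taken from the log; adjust for another cluster.
	fmt.Println(waitHealthz("https://192.168.50.116:8443/healthz", 4*time.Minute))
}
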
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthoo
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:34 UTC, ends at Tue 2023-12-05 21:11:08 UTC. --
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.097199501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810668097188407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=fd3895ff-c4ac-4bd2-9f9b-ee143a3af054 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.097953839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3a9db8ef-e4cf-415c-9ef3-63f115d82865 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.098000408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3a9db8ef-e4cf-415c-9ef3-63f115d82865 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.098148574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3a9db8ef-e4cf-415c-9ef3-63f115d82865 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.139617923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bddcf0eb-b9ea-4be4-ba31-e1313f71fd6e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.139810788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bddcf0eb-b9ea-4be4-ba31-e1313f71fd6e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.141004446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3e93f44e-e9fa-462f-919e-f753b1ec26ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.141487692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810668141464711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=3e93f44e-e9fa-462f-919e-f753b1ec26ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.147110356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=944af359-2dd1-4ac1-a9a6-3ccf7f80f701 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.147191602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=944af359-2dd1-4ac1-a9a6-3ccf7f80f701 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.147334891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=944af359-2dd1-4ac1-a9a6-3ccf7f80f701 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.192121829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b6ff78d7-1bab-499b-9f6a-b07f36b76a4b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.192192718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b6ff78d7-1bab-499b-9f6a-b07f36b76a4b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.193987087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c6684cd7-3191-4d24-b925-6f8b3bd92a1b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.194413366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810668194392182,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c6684cd7-3191-4d24-b925-6f8b3bd92a1b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.195140687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ea22960-c3ea-4412-8ded-3d9c993e0f1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.195220678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ea22960-c3ea-4412-8ded-3d9c993e0f1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.195387514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ea22960-c3ea-4412-8ded-3d9c993e0f1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.238000113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=77e8f39e-61f5-421d-8f72-4c0b85912cf5 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.238063397Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=77e8f39e-61f5-421d-8f72-4c0b85912cf5 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.239485672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=18ad9a4f-78d2-471c-ab4b-9fd42d30fe7f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.240068159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810668240051612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=18ad9a4f-78d2-471c-ab4b-9fd42d30fe7f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.240763385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7ed4fb7a-e804-40f8-834c-65b1901ddd33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.240813507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7ed4fb7a-e804-40f8-834c-65b1901ddd33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:08 no-preload-143651 crio[706]: time="2023-12-05 21:11:08.240962958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf,PodSandboxId:a4cf96b4b71faff4fef6133648a679f74b8a506ef609a556fa4748e91445ba21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701809835181257526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70819185-f661-434d-b039-e8b822dbc886,},Annotations:map[string]string{io.kubernetes.container.hash: 62300f07,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999,PodSandboxId:10d4291f05a4e402b150444bfcf2a4ac1af2d8d7c8f430a20ffab8858f27323c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701809834644740358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4n2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a90349b-f4fa-413d-b2fb-8672988095af,},Annotations:map[string]string{io.kubernetes.container.hash: c923b3a7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739,PodSandboxId:c91321d2a1ba8996ac78d9376f626d67ecf340e8720dbee3670be02c029d7d75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:10504e3918d5c118ab4ecc36cd79c1b3d37825111bb19ff9649d823c6048e208,State:CONTAINER_RUNNING,CreatedAt:1701809833196826270,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6txsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: ce2eae51-b812-4cde-a012-1d0b53607ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 5327a75b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2,PodSandboxId:525b07ad59b91cbb4eed9f3d66488d8a41bfbccf8be82aa1769162c1bdbb9ac9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701809810449991678,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55a40cb5f0f0e381424f71c21a77c609,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b983ab5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc,PodSandboxId:dab0b07edd9522b4f468be801142868d4cd45a57c3fdcdc30322a6abb0ec368b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:392ed8553c3109e2b84c9156b8908ef637d480b377a06656dc3f6c55252f0f31,State:CONTAINER_RUNNING,CreatedAt:1701809810267144623,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59c222c25b6e78581c39c000c20a240,},Annotations:map
[string]string{io.kubernetes.container.hash: 6e0ac30e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1,PodSandboxId:f8a1fe2755ce18630d3426ef5bce0f94a0f9ff5bfe49e0daed946324a1ee9a37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:5f0b6e97e1c7566418dcae71143fdcfcc27c89c20f05f8f4a6c0a59c05bf62e5,State:CONTAINER_RUNNING,CreatedAt:1701809810218225164,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9246bc0f046ab304f60d38907
3024f10,},Annotations:map[string]string{io.kubernetes.container.hash: 7d9f5f80,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172,PodSandboxId:f64d6e78b581cb6558cf1ecbbf3de3b0fd9fd2c4f93f958b1acbd8f14464a4b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9b5559bc9bb852fd4652513cc0d9e3992581e6c772e01d189a1803fce3912e0,State:CONTAINER_RUNNING,CreatedAt:1701809809973459743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-143651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6becf830e220a43860b7532b74f7c2,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1576a6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7ed4fb7a-e804-40f8-834c-65b1901ddd33 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	608d2cd91d615       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   a4cf96b4b71fa       storage-provisioner
	a91f9eeb6cc7b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   10d4291f05a4e       coredns-76f75df574-4n2wg
	88a3c5c33dce7       86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff   13 minutes ago      Running             kube-proxy                0                   c91321d2a1ba8       kube-proxy-6txsz
	0c9415fdfc010       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   525b07ad59b91       etcd-no-preload-143651
	9c8a5763db1e6       5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956   14 minutes ago      Running             kube-apiserver            2                   dab0b07edd952       kube-apiserver-no-preload-143651
	8a7cfeb23c032       b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09   14 minutes ago      Running             kube-controller-manager   2                   f8a1fe2755ce1       kube-controller-manager-no-preload-143651
	a31fcd9330606       b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542   14 minutes ago      Running             kube-scheduler            2                   f64d6e78b581c       kube-scheduler-no-preload-143651
	
	* 
	* ==> coredns [a91f9eeb6cc7b9613c29ac85262f92fdfe73fec499bd7f030c5dc6bcaa6d8999] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38467 - 37436 "HINFO IN 3778141838030031282.8307257075047644438. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010644248s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-143651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-143651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=no-preload-143651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:56:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-143651
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:11:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:07:32 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:07:32 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:07:32 +0000   Tue, 05 Dec 2023 20:56:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:07:32 +0000   Tue, 05 Dec 2023 20:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.162
	  Hostname:    no-preload-143651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a90a425d63b6431e94a42f715d9da1ce
	  System UUID:                a90a425d-63b6-431e-94a4-2f715d9da1ce
	  Boot ID:                    c0f23393-24ab-4ed0-8ede-e74c7715efea
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.1
	  Kube-Proxy Version:         v1.29.0-rc.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4n2wg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-143651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-143651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-143651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6txsz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-143651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-xwfpm              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-143651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-143651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-143651 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-143651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-143651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-143651 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-143651 event: Registered Node no-preload-143651 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.079828] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.550482] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.627147] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154622] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000007] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.494588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.245976] systemd-fstab-generator[631]: Ignoring "noauto" for root device
	[  +0.134158] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.145679] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.128557] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.226448] systemd-fstab-generator[690]: Ignoring "noauto" for root device
	[Dec 5 20:52] systemd-fstab-generator[1317]: Ignoring "noauto" for root device
	[ +19.563493] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:56] systemd-fstab-generator[3935]: Ignoring "noauto" for root device
	[  +9.850746] systemd-fstab-generator[4264]: Ignoring "noauto" for root device
	[Dec 5 20:57] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [0c9415fdfc010ded5ad728d02d9929a7081a759130b5301a51c81169047b06b2] <==
	* {"level":"info","ts":"2023-12-05T20:56:52.56596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:56:52.565985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 received MsgPreVoteResp from cecb7c331cf85085 at term 1"}
	{"level":"info","ts":"2023-12-05T20:56:52.565998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 received MsgVoteResp from cecb7c331cf85085 at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cecb7c331cf85085 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.566035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cecb7c331cf85085 elected leader cecb7c331cf85085 at term 2"}
	{"level":"info","ts":"2023-12-05T20:56:52.567602Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cecb7c331cf85085","local-member-attributes":"{Name:no-preload-143651 ClientURLs:[https://192.168.61.162:2379]}","request-path":"/0/members/cecb7c331cf85085/attributes","cluster-id":"eabf72ed03489de5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:56:52.567817Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.575812Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"eabf72ed03489de5","local-member-id":"cecb7c331cf85085","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.575951Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.576003Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:56:52.583935Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-05T20:56:52.584864Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.162:2380"}
	{"level":"info","ts":"2023-12-05T20:56:52.584916Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.162:2380"}
	{"level":"info","ts":"2023-12-05T20:56:52.585547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:56:52.589373Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:56:52.592978Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"cecb7c331cf85085","initial-advertise-peer-urls":["https://192.168.61.162:2380"],"listen-peer-urls":["https://192.168.61.162:2380"],"advertise-client-urls":["https://192.168.61.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-05T20:56:52.59356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.162:2379"}
	{"level":"info","ts":"2023-12-05T20:56:52.595977Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:56:52.597501Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:56:52.601809Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:56:52.601858Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T21:06:53.23695Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":713}
	{"level":"info","ts":"2023-12-05T21:06:53.240157Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":713,"took":"2.368054ms","hash":2939929765}
	{"level":"info","ts":"2023-12-05T21:06:53.240259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2939929765,"revision":713,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:11:08 up 19 min,  0 users,  load average: 0.08, 0.17, 0.22
	Linux no-preload-143651 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9c8a5763db1e638d6aeaea7aca1d7c2cf1730b2a2ec01c7878e589182491dccc] <==
	* I1205 21:04:55.869348       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:06:54.873652       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:06:54.874812       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1205 21:06:55.875438       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:06:55.875516       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:06:55.875527       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:06:55.875593       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:06:55.875748       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:06:55.876781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:07:55.876485       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:07:55.876723       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:07:55.876735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:07:55.877818       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:07:55.878024       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:07:55.878075       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:09:55.877139       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:09:55.877500       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:09:55.877534       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:09:55.878312       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:09:55.878456       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:09:55.879749       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8a7cfeb23c032c1569d922f14c599884fe294d6841a4d708130c88dc7d5977a1] <==
	* I1205 21:05:11.987747       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:05:41.599813       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:05:41.999095       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:11.606936       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:12.013971       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:41.612359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:42.023167       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:07:11.618938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:07:12.033286       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:07:41.630465       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:07:42.041483       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:08:11.637476       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:08:12.050291       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:08:29.835354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="305.807µs"
	E1205 21:08:41.643312       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:08:42.060288       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:08:42.831078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="74.921µs"
	E1205 21:09:11.648312       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:09:12.069507       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:09:41.655356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:09:42.077866       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:11.662313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:12.087316       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:41.667743       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:42.097731       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [88a3c5c33dce79efcaef17f2244428034677b1f2952065cc8ba6256678b6e739] <==
	* I1205 20:57:13.714293       1 server_others.go:72] "Using iptables proxy"
	I1205 20:57:13.744597       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.162"]
	I1205 20:57:14.715569       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:57:14.718202       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:57:14.718320       1 server_others.go:168] "Using iptables Proxier"
	I1205 20:57:14.732750       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:57:14.732944       1 server.go:865] "Version info" version="v1.29.0-rc.1"
	I1205 20:57:14.732956       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:57:14.735530       1 config.go:188] "Starting service config controller"
	I1205 20:57:14.735584       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:57:14.735605       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:57:14.735609       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:57:14.736265       1 config.go:315] "Starting node config controller"
	I1205 20:57:14.736305       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:57:14.836814       1 shared_informer.go:318] Caches are synced for node config
	I1205 20:57:14.836903       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:57:14.836913       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [a31fcd933060678dec13d055950b60aae2ff3ae7bc3f9852fd7cc4c0937db172] <==
	* W1205 20:56:54.913389       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:56:54.913442       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:56:54.913926       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:54.913973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.715552       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:56:55.715618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:56:55.813932       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:56:55.814047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 20:56:55.862360       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.862519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.898489       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:56:55.898598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 20:56:55.908328       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.908410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:55.962010       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:55.962194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:56.080019       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:56:56.080187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 20:56:56.156106       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:56:56.156277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 20:56:56.168790       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:56:56.168965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 20:56:56.433965       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:56:56.434115       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1205 20:56:59.281655       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:34 UTC, ends at Tue 2023-12-05 21:11:08 UTC. --
	Dec 05 21:08:18 no-preload-143651 kubelet[4271]: E1205 21:08:18.825954    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:08:29 no-preload-143651 kubelet[4271]: E1205 21:08:29.812174    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:08:42 no-preload-143651 kubelet[4271]: E1205 21:08:42.812061    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:08:57 no-preload-143651 kubelet[4271]: E1205 21:08:57.811868    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:08:58 no-preload-143651 kubelet[4271]: E1205 21:08:58.930240    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:08:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:08:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:08:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:09:11 no-preload-143651 kubelet[4271]: E1205 21:09:11.811248    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:09:26 no-preload-143651 kubelet[4271]: E1205 21:09:26.812079    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:09:37 no-preload-143651 kubelet[4271]: E1205 21:09:37.811883    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:09:51 no-preload-143651 kubelet[4271]: E1205 21:09:51.814139    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:09:58 no-preload-143651 kubelet[4271]: E1205 21:09:58.931398    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:09:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:09:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:09:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:10:06 no-preload-143651 kubelet[4271]: E1205 21:10:06.811780    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:10:19 no-preload-143651 kubelet[4271]: E1205 21:10:19.812303    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:10:31 no-preload-143651 kubelet[4271]: E1205 21:10:31.811312    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:10:44 no-preload-143651 kubelet[4271]: E1205 21:10:44.811126    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	Dec 05 21:10:58 no-preload-143651 kubelet[4271]: E1205 21:10:58.930965    4271 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:10:58 no-preload-143651 kubelet[4271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:10:58 no-preload-143651 kubelet[4271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:10:58 no-preload-143651 kubelet[4271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:10:59 no-preload-143651 kubelet[4271]: E1205 21:10:59.810901    4271 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-xwfpm" podUID="76fbd532-715f-49fd-942d-33a312fb566c"
	
	* 
	* ==> storage-provisioner [608d2cd91d615fc730d06500850e418ea0b9aac46a827a317b1775f4db3c3ccf] <==
	* I1205 20:57:15.310427       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:15.328066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:15.328254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:15.341635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:15.341871       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc8db603-0517-40d4-ba16-6f4b1b6d55f1", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6 became leader
	I1205 20:57:15.342550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6!
	I1205 20:57:15.443764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-143651_893a5e49-eb0b-475c-9d51-4ca0924c3fe6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-143651 -n no-preload-143651
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-143651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-xwfpm
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm: exit status 1 (76.930899ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-xwfpm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-143651 describe pod metrics-server-57f55c9bc5-xwfpm: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (289.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (272.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-331495 -n embed-certs-331495
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:11:23.719037701 +0000 UTC m=+5796.531497335
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-331495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-331495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.733µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-331495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-331495 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-331495 logs -n 25: (1.276625592s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 21:11 UTC | 05 Dec 23 21:11 UTC |
	| start   | -p newest-cni-051721 --memory=2200 --alsologtostderr   | newest-cni-051721            | jenkins | v1.32.0 | 05 Dec 23 21:11 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 21:11 UTC | 05 Dec 23 21:11 UTC |
	| start   | -p auto-855101 --memory=3072                           | auto-855101                  | jenkins | v1.32.0 | 05 Dec 23 21:11 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 21:11:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:11:10.670339   52289 out.go:296] Setting OutFile to fd 1 ...
	I1205 21:11:10.670483   52289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:11:10.670491   52289 out.go:309] Setting ErrFile to fd 2...
	I1205 21:11:10.670496   52289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:11:10.670720   52289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 21:11:10.671300   52289 out.go:303] Setting JSON to false
	I1205 21:11:10.672296   52289 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6824,"bootTime":1701803847,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:11:10.672360   52289 start.go:138] virtualization: kvm guest
	I1205 21:11:10.674837   52289 out.go:177] * [auto-855101] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:11:10.676480   52289 notify.go:220] Checking for updates...
	I1205 21:11:10.677975   52289 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 21:11:10.679678   52289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:11:10.681259   52289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 21:11:10.682677   52289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 21:11:10.684140   52289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:11:10.685468   52289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:11:10.687342   52289 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 21:11:10.687465   52289 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 21:11:10.687569   52289 config.go:182] Loaded profile config "newest-cni-051721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 21:11:10.687647   52289 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 21:11:10.727846   52289 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:11:10.729468   52289 start.go:298] selected driver: kvm2
	I1205 21:11:10.729487   52289 start.go:902] validating driver "kvm2" against <nil>
	I1205 21:11:10.729497   52289 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:11:10.730191   52289 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:11:10.730340   52289 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:11:10.746580   52289 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 21:11:10.746628   52289 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 21:11:10.746827   52289 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:11:10.746885   52289 cni.go:84] Creating CNI manager for ""
	I1205 21:11:10.746897   52289 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:11:10.746909   52289 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 21:11:10.746918   52289 start_flags.go:323] config:
	{Name:auto-855101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-855101 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 21:11:10.747083   52289 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:11:10.750124   52289 out.go:177] * Starting control plane node auto-855101 in cluster auto-855101
	I1205 21:11:07.774629   52033 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 21:11:07.774754   52033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:11:07.774805   52033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:11:07.788876   52033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35849
	I1205 21:11:07.789392   52033 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:11:07.789991   52033 main.go:141] libmachine: Using API Version  1
	I1205 21:11:07.790017   52033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:11:07.790429   52033 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:11:07.790649   52033 main.go:141] libmachine: (newest-cni-051721) Calling .GetMachineName
	I1205 21:11:07.790809   52033 main.go:141] libmachine: (newest-cni-051721) Calling .DriverName
	I1205 21:11:07.790939   52033 start.go:159] libmachine.API.Create for "newest-cni-051721" (driver="kvm2")
	I1205 21:11:07.790970   52033 client.go:168] LocalClient.Create starting
	I1205 21:11:07.791004   52033 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem
	I1205 21:11:07.791052   52033 main.go:141] libmachine: Decoding PEM data...
	I1205 21:11:07.791077   52033 main.go:141] libmachine: Parsing certificate...
	I1205 21:11:07.791146   52033 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem
	I1205 21:11:07.791174   52033 main.go:141] libmachine: Decoding PEM data...
	I1205 21:11:07.791196   52033 main.go:141] libmachine: Parsing certificate...
	I1205 21:11:07.791226   52033 main.go:141] libmachine: Running pre-create checks...
	I1205 21:11:07.791247   52033 main.go:141] libmachine: (newest-cni-051721) Calling .PreCreateCheck
	I1205 21:11:07.791617   52033 main.go:141] libmachine: (newest-cni-051721) Calling .GetConfigRaw
	I1205 21:11:07.792045   52033 main.go:141] libmachine: Creating machine...
	I1205 21:11:07.792060   52033 main.go:141] libmachine: (newest-cni-051721) Calling .Create
	I1205 21:11:07.792180   52033 main.go:141] libmachine: (newest-cni-051721) Creating KVM machine...
	I1205 21:11:07.793404   52033 main.go:141] libmachine: (newest-cni-051721) DBG | found existing default KVM network
	I1205 21:11:07.794625   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:07.794495   52067 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3d:31:72} reservation:<nil>}
	I1205 21:11:07.795701   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:07.795607   52067 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d60b0}
	I1205 21:11:07.800942   52033 main.go:141] libmachine: (newest-cni-051721) DBG | trying to create private KVM network mk-newest-cni-051721 192.168.50.0/24...
	I1205 21:11:07.881225   52033 main.go:141] libmachine: (newest-cni-051721) DBG | private KVM network mk-newest-cni-051721 192.168.50.0/24 created
	I1205 21:11:07.881255   52033 main.go:141] libmachine: (newest-cni-051721) Setting up store path in /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721 ...
	I1205 21:11:07.881285   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:07.881223   52067 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 21:11:07.881309   52033 main.go:141] libmachine: (newest-cni-051721) Building disk image from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 21:11:07.881330   52033 main.go:141] libmachine: (newest-cni-051721) Downloading /home/jenkins/minikube-integration/17731-6237/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso...
	I1205 21:11:08.099216   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:08.099067   52067 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721/id_rsa...
	I1205 21:11:08.183678   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:08.183530   52067 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721/newest-cni-051721.rawdisk...
	I1205 21:11:08.183714   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Writing magic tar header
	I1205 21:11:08.183740   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Writing SSH key tar header
	I1205 21:11:08.183836   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:08.183758   52067 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721 ...
	I1205 21:11:08.183995   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721
	I1205 21:11:08.184024   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube/machines
	I1205 21:11:08.184039   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721 (perms=drwx------)
	I1205 21:11:08.184057   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 21:11:08.184077   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17731-6237
	I1205 21:11:08.184092   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 21:11:08.184115   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home/jenkins
	I1205 21:11:08.184138   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Checking permissions on dir: /home
	I1205 21:11:08.184157   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube/machines (perms=drwxr-xr-x)
	I1205 21:11:08.184184   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237/.minikube (perms=drwxr-xr-x)
	I1205 21:11:08.184200   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins/minikube-integration/17731-6237 (perms=drwxrwxr-x)
	I1205 21:11:08.184224   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 21:11:08.184244   52033 main.go:141] libmachine: (newest-cni-051721) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 21:11:08.184258   52033 main.go:141] libmachine: (newest-cni-051721) DBG | Skipping /home - not owner
	I1205 21:11:08.184275   52033 main.go:141] libmachine: (newest-cni-051721) Creating domain...
	I1205 21:11:08.185383   52033 main.go:141] libmachine: (newest-cni-051721) define libvirt domain using xml: 
	I1205 21:11:08.185409   52033 main.go:141] libmachine: (newest-cni-051721) <domain type='kvm'>
	I1205 21:11:08.185434   52033 main.go:141] libmachine: (newest-cni-051721)   <name>newest-cni-051721</name>
	I1205 21:11:08.185450   52033 main.go:141] libmachine: (newest-cni-051721)   <memory unit='MiB'>2200</memory>
	I1205 21:11:08.185462   52033 main.go:141] libmachine: (newest-cni-051721)   <vcpu>2</vcpu>
	I1205 21:11:08.185473   52033 main.go:141] libmachine: (newest-cni-051721)   <features>
	I1205 21:11:08.185480   52033 main.go:141] libmachine: (newest-cni-051721)     <acpi/>
	I1205 21:11:08.185487   52033 main.go:141] libmachine: (newest-cni-051721)     <apic/>
	I1205 21:11:08.185493   52033 main.go:141] libmachine: (newest-cni-051721)     <pae/>
	I1205 21:11:08.185501   52033 main.go:141] libmachine: (newest-cni-051721)     
	I1205 21:11:08.185507   52033 main.go:141] libmachine: (newest-cni-051721)   </features>
	I1205 21:11:08.185517   52033 main.go:141] libmachine: (newest-cni-051721)   <cpu mode='host-passthrough'>
	I1205 21:11:08.185530   52033 main.go:141] libmachine: (newest-cni-051721)   
	I1205 21:11:08.185543   52033 main.go:141] libmachine: (newest-cni-051721)   </cpu>
	I1205 21:11:08.185830   52033 main.go:141] libmachine: (newest-cni-051721)   <os>
	I1205 21:11:08.185876   52033 main.go:141] libmachine: (newest-cni-051721)     <type>hvm</type>
	I1205 21:11:08.186025   52033 main.go:141] libmachine: (newest-cni-051721)     <boot dev='cdrom'/>
	I1205 21:11:08.186045   52033 main.go:141] libmachine: (newest-cni-051721)     <boot dev='hd'/>
	I1205 21:11:08.186057   52033 main.go:141] libmachine: (newest-cni-051721)     <bootmenu enable='no'/>
	I1205 21:11:08.186066   52033 main.go:141] libmachine: (newest-cni-051721)   </os>
	I1205 21:11:08.186081   52033 main.go:141] libmachine: (newest-cni-051721)   <devices>
	I1205 21:11:08.186104   52033 main.go:141] libmachine: (newest-cni-051721)     <disk type='file' device='cdrom'>
	I1205 21:11:08.186124   52033 main.go:141] libmachine: (newest-cni-051721)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721/boot2docker.iso'/>
	I1205 21:11:08.186133   52033 main.go:141] libmachine: (newest-cni-051721)       <target dev='hdc' bus='scsi'/>
	I1205 21:11:08.186144   52033 main.go:141] libmachine: (newest-cni-051721)       <readonly/>
	I1205 21:11:08.186156   52033 main.go:141] libmachine: (newest-cni-051721)     </disk>
	I1205 21:11:08.186168   52033 main.go:141] libmachine: (newest-cni-051721)     <disk type='file' device='disk'>
	I1205 21:11:08.186179   52033 main.go:141] libmachine: (newest-cni-051721)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 21:11:08.186198   52033 main.go:141] libmachine: (newest-cni-051721)       <source file='/home/jenkins/minikube-integration/17731-6237/.minikube/machines/newest-cni-051721/newest-cni-051721.rawdisk'/>
	I1205 21:11:08.186208   52033 main.go:141] libmachine: (newest-cni-051721)       <target dev='hda' bus='virtio'/>
	I1205 21:11:08.186222   52033 main.go:141] libmachine: (newest-cni-051721)     </disk>
	I1205 21:11:08.186231   52033 main.go:141] libmachine: (newest-cni-051721)     <interface type='network'>
	I1205 21:11:08.186243   52033 main.go:141] libmachine: (newest-cni-051721)       <source network='mk-newest-cni-051721'/>
	I1205 21:11:08.186256   52033 main.go:141] libmachine: (newest-cni-051721)       <model type='virtio'/>
	I1205 21:11:08.186306   52033 main.go:141] libmachine: (newest-cni-051721)     </interface>
	I1205 21:11:08.186320   52033 main.go:141] libmachine: (newest-cni-051721)     <interface type='network'>
	I1205 21:11:08.186336   52033 main.go:141] libmachine: (newest-cni-051721)       <source network='default'/>
	I1205 21:11:08.186345   52033 main.go:141] libmachine: (newest-cni-051721)       <model type='virtio'/>
	I1205 21:11:08.186360   52033 main.go:141] libmachine: (newest-cni-051721)     </interface>
	I1205 21:11:08.186369   52033 main.go:141] libmachine: (newest-cni-051721)     <serial type='pty'>
	I1205 21:11:08.186380   52033 main.go:141] libmachine: (newest-cni-051721)       <target port='0'/>
	I1205 21:11:08.186389   52033 main.go:141] libmachine: (newest-cni-051721)     </serial>
	I1205 21:11:08.186403   52033 main.go:141] libmachine: (newest-cni-051721)     <console type='pty'>
	I1205 21:11:08.186413   52033 main.go:141] libmachine: (newest-cni-051721)       <target type='serial' port='0'/>
	I1205 21:11:08.186428   52033 main.go:141] libmachine: (newest-cni-051721)     </console>
	I1205 21:11:08.186437   52033 main.go:141] libmachine: (newest-cni-051721)     <rng model='virtio'>
	I1205 21:11:08.186449   52033 main.go:141] libmachine: (newest-cni-051721)       <backend model='random'>/dev/random</backend>
	I1205 21:11:08.186462   52033 main.go:141] libmachine: (newest-cni-051721)     </rng>
	I1205 21:11:08.186471   52033 main.go:141] libmachine: (newest-cni-051721)     
	I1205 21:11:08.186479   52033 main.go:141] libmachine: (newest-cni-051721)     
	I1205 21:11:08.186493   52033 main.go:141] libmachine: (newest-cni-051721)   </devices>
	I1205 21:11:08.186501   52033 main.go:141] libmachine: (newest-cni-051721) </domain>
	I1205 21:11:08.186520   52033 main.go:141] libmachine: (newest-cni-051721) 
	I1205 21:11:08.191577   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:55:50:e6 in network default
	I1205 21:11:08.192249   52033 main.go:141] libmachine: (newest-cni-051721) Ensuring networks are active...
	I1205 21:11:08.192273   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:08.193123   52033 main.go:141] libmachine: (newest-cni-051721) Ensuring network default is active
	I1205 21:11:08.193466   52033 main.go:141] libmachine: (newest-cni-051721) Ensuring network mk-newest-cni-051721 is active
	I1205 21:11:08.194141   52033 main.go:141] libmachine: (newest-cni-051721) Getting domain xml...
	I1205 21:11:08.194883   52033 main.go:141] libmachine: (newest-cni-051721) Creating domain...
	I1205 21:11:09.628617   52033 main.go:141] libmachine: (newest-cni-051721) Waiting to get IP...
	I1205 21:11:09.629388   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:09.629950   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:09.629983   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:09.629904   52067 retry.go:31] will retry after 275.908515ms: waiting for machine to come up
	I1205 21:11:10.358146   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:10.362678   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:10.362714   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:10.362631   52067 retry.go:31] will retry after 320.424154ms: waiting for machine to come up
	I1205 21:11:10.685174   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:10.685776   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:10.685810   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:10.685749   52067 retry.go:31] will retry after 358.82092ms: waiting for machine to come up
	I1205 21:11:11.046642   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:11.047275   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:11.047302   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:11.047220   52067 retry.go:31] will retry after 450.683871ms: waiting for machine to come up
	I1205 21:11:11.499864   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:11.500371   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:11.500400   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:11.500319   52067 retry.go:31] will retry after 476.542182ms: waiting for machine to come up
	I1205 21:11:11.977872   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:11.978396   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:11.978451   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:11.978348   52067 retry.go:31] will retry after 852.359228ms: waiting for machine to come up
	I1205 21:11:10.751646   52289 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 21:11:10.751707   52289 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 21:11:10.751714   52289 cache.go:56] Caching tarball of preloaded images
	I1205 21:11:10.751821   52289 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:11:10.751834   52289 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 21:11:10.751935   52289 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/config.json ...
	I1205 21:11:10.751952   52289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/config.json: {Name:mk3d9a889199271209489616640030c5c555e4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:11:10.752102   52289 start.go:365] acquiring machines lock for auto-855101: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:11:12.831945   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:12.832456   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:12.832478   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:12.832403   52067 retry.go:31] will retry after 1.132486633s: waiting for machine to come up
	I1205 21:11:13.967971   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:13.968571   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:13.968596   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:13.968517   52067 retry.go:31] will retry after 1.134219335s: waiting for machine to come up
	I1205 21:11:15.104745   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:15.105182   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:15.105218   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:15.105150   52067 retry.go:31] will retry after 1.237009834s: waiting for machine to come up
	I1205 21:11:16.343413   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:16.343966   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:16.344002   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:16.343905   52067 retry.go:31] will retry after 2.141158509s: waiting for machine to come up
	I1205 21:11:18.486603   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:18.487046   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:18.487080   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:18.486992   52067 retry.go:31] will retry after 2.719430545s: waiting for machine to come up
	I1205 21:11:21.209901   52033 main.go:141] libmachine: (newest-cni-051721) DBG | domain newest-cni-051721 has defined MAC address 52:54:00:09:29:b0 in network mk-newest-cni-051721
	I1205 21:11:21.210421   52033 main.go:141] libmachine: (newest-cni-051721) DBG | unable to find current IP address of domain newest-cni-051721 in network mk-newest-cni-051721
	I1205 21:11:21.210454   52033 main.go:141] libmachine: (newest-cni-051721) DBG | I1205 21:11:21.210368   52067 retry.go:31] will retry after 2.234971338s: waiting for machine to come up
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:52:15 UTC, ends at Tue 2023-12-05 21:11:24 UTC. --
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.456972222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810684456957300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f8a190cd-1a1d-461c-a181-c84ff55d9ab4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.458399284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f63bc236-17ac-4fa5-bd06-896bdbc813cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.458648554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f63bc236-17ac-4fa5-bd06-896bdbc813cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.459232393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f63bc236-17ac-4fa5-bd06-896bdbc813cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.511413140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6a678d38-5bb8-4f6e-98d7-86903d8b5d83 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.511503257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6a678d38-5bb8-4f6e-98d7-86903d8b5d83 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.513272872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=171cf94a-5948-494d-b3e6-e56553463c1f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.513684479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810684513665684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=171cf94a-5948-494d-b3e6-e56553463c1f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.514527192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=63b97b24-1a3e-4088-bd50-76354222a886 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.514607910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=63b97b24-1a3e-4088-bd50-76354222a886 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.514789404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=63b97b24-1a3e-4088-bd50-76354222a886 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.569785358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c4c2c4ba-2a37-4077-81fb-90fe40b692ee name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.569852706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c4c2c4ba-2a37-4077-81fb-90fe40b692ee name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.570996291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3f22a652-f6ca-42b3-b7c6-125c27465225 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.571480454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810684571465458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3f22a652-f6ca-42b3-b7c6-125c27465225 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.572257371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08accf9f-dc87-4631-bcb4-8c5bccdbc586 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.572308170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08accf9f-dc87-4631-bcb4-8c5bccdbc586 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.572473477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08accf9f-dc87-4631-bcb4-8c5bccdbc586 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.612730025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dcede846-66a2-4f1d-8f7d-01eb1d333dc2 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.612790763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dcede846-66a2-4f1d-8f7d-01eb1d333dc2 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.613696120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=555c6a4e-32e8-4de8-bf76-ff6fd331f208 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.614186463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810684614159462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=555c6a4e-32e8-4de8-bf76-ff6fd331f208 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.614633212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9042a277-3182-49ee-a749-033a0fb8eeec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.614678383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9042a277-3182-49ee-a749-033a0fb8eeec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:24 embed-certs-331495 crio[715]: time="2023-12-05 21:11:24.614866770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa,PodSandboxId:de22a871f7b64fd699ee30556e1ac22986022782eda15b105867640516875c58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809868312292061,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c366deb-4564-44b8-87fe-45e03cf7a774,},Annotations:map[string]string{io.kubernetes.container.hash: 4147418,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39,PodSandboxId:0fd4abd8da6c41da2450dc0114155f06c692fde2698c6ac2f48ee436788ca45d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701809867516839519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6d7wq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4525c8a-b7e3-450f-bdb4-12dfeb0ff203,},Annotations:map[string]string{io.kubernetes.container.hash: f41b307f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815,PodSandboxId:d237f5479c8bd426fa39b00f26a036ef7cfbe0e85416f3a931cbcdbb73d59cce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701809866742956400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tbr8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 8138c69a-41ce-4880-b2ac-274dff0bdeba,},Annotations:map[string]string{io.kubernetes.container.hash: 60a79440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d,PodSandboxId:7f87da88bd930c9a332a5898feb52696508e2f87dfb0fecbea933b6f00aee195,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701809843548834929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 259a5a07e128b87f92f02686495f4d01,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be,PodSandboxId:4e711bf51b91a2338b2f48b4e2ee3809e7b11b9a95a5d69ca6445361b2303b8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701809843134803270,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a30f8b54cd5e1347171ddd536918535e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: adefc318,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3,PodSandboxId:78292bfed715ce6bce2605251c33b1203856b61e6592098ee7468da465a06a15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701809842901159626,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726b83e1c8bfc9cb126096
cbed22e824,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8,PodSandboxId:d09b800aa8f8b718b84de9fc4c675dd9fc1235ebda41722893655754e2c4c2a3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701809842837273122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-331495,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d02026c8729db0a9c315611a9ed1c4e
,},Annotations:map[string]string{io.kubernetes.container.hash: cf1c4696,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9042a277-3182-49ee-a749-033a0fb8eeec name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	182d80c604bcb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   de22a871f7b64       storage-provisioner
	bd8c3411f3f5b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   0fd4abd8da6c4       coredns-5dd5756b68-6d7wq
	0ae4b48879d4a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   d237f5479c8bd       kube-proxy-tbr8k
	d780a6357dc09       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   7f87da88bd930       kube-scheduler-embed-certs-331495
	97a6cce6fb0ca       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   4e711bf51b91a       etcd-embed-certs-331495
	3eae6f73bd78e       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   78292bfed715c       kube-controller-manager-embed-certs-331495
	dbd0f55bdb24e       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   d09b800aa8f8b       kube-apiserver-embed-certs-331495
	
	* 
	* ==> coredns [bd8c3411f3f5b0cac088c82bbbb16fcc2c113538a8d0717c235a5a2efdba6c39] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-331495
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-331495
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=embed-certs-331495
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:57:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-331495
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Dec 2023 21:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:08:04 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:08:04 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:08:04 +0000   Tue, 05 Dec 2023 20:57:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:08:04 +0000   Tue, 05 Dec 2023 20:57:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.180
	  Hostname:    embed-certs-331495
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 fefaa329554e4f489cf4b02aa9a4e7a7
	  System UUID:                fefaa329-554e-4f48-9cf4-b02aa9a4e7a7
	  Boot ID:                    96e331ea-2fcf-49e4-8546-22ef663c0c0b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6d7wq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-331495                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-331495             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-331495    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-tbr8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-331495             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-wv2t6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node embed-certs-331495 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node embed-certs-331495 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node embed-certs-331495 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node embed-certs-331495 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node embed-certs-331495 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node embed-certs-331495 event: Registered Node embed-certs-331495 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.075888] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.690420] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.721951] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150625] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.652296] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000067] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.641480] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.148003] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.180748] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.182513] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.317315] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +17.561225] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[Dec 5 20:53] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:57] systemd-fstab-generator[3522]: Ignoring "noauto" for root device
	[  +9.800395] systemd-fstab-generator[3850]: Ignoring "noauto" for root device
	[ +14.723507] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [97a6cce6fb0cadc69eaa3f57041275b19480874d82c483cf33ea155e298d38be] <==
	* {"level":"info","ts":"2023-12-05T20:57:25.214395Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.180:2380"}
	{"level":"info","ts":"2023-12-05T20:57:25.21292Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-05T20:57:25.221898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 switched to configuration voters=(728820823681708824)"}
	{"level":"info","ts":"2023-12-05T20:57:25.222236Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","added-peer-id":"a1d4aad7c74b318","added-peer-peer-urls":["https://192.168.72.180:2380"]}
	{"level":"info","ts":"2023-12-05T20:57:25.858975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.859036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.859053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgPreVoteResp from a1d4aad7c74b318 at term 1"}
	{"level":"info","ts":"2023-12-05T20:57:25.85912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became candidate at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.85913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 received MsgVoteResp from a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.859139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a1d4aad7c74b318 became leader at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.859146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a1d4aad7c74b318 elected leader a1d4aad7c74b318 at term 2"}
	{"level":"info","ts":"2023-12-05T20:57:25.86089Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a1d4aad7c74b318","local-member-attributes":"{Name:embed-certs-331495 ClientURLs:[https://192.168.72.180:2379]}","request-path":"/0/members/a1d4aad7c74b318/attributes","cluster-id":"1bb44bc72743d07d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-05T20:57:25.861146Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.861235Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:57:25.862478Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-05T20:57:25.862533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-05T20:57:25.861164Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-05T20:57:25.863421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-05T20:57:25.863571Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1bb44bc72743d07d","local-member-id":"a1d4aad7c74b318","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.863647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.863672Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-05T20:57:25.864318Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.180:2379"}
	{"level":"info","ts":"2023-12-05T21:07:25.899157Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":674}
	{"level":"info","ts":"2023-12-05T21:07:25.902935Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":674,"took":"2.747063ms","hash":4150755352}
	{"level":"info","ts":"2023-12-05T21:07:25.903059Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4150755352,"revision":674,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  21:11:24 up 19 min,  0 users,  load average: 0.14, 0.38, 0.36
	Linux embed-certs-331495 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dbd0f55bdb24ed8eb48350bc9153a21ae53db7a333a39af57611b4a0bea469f8] <==
	* I1205 21:07:27.725475       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:07:28.725657       1 handler_proxy.go:93] no RequestInfo found in the context
	W1205 21:07:28.725705       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:07:28.725853       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:07:28.725868       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1205 21:07:28.725799       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:07:28.727181       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:08:27.571643       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:08:28.726752       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:08:28.726889       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:08:28.726920       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:08:28.728029       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:08:28.728158       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:08:28.728168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:09:27.573233       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 21:10:27.571969       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1205 21:10:28.727582       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:10:28.727622       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1205 21:10:28.727629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:10:28.728930       1 handler_proxy.go:93] no RequestInfo found in the context
	E1205 21:10:28.729136       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:10:28.729180       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3eae6f73bd78e27c57668ba25268cb0d08a6b56e2b3abeab9912cdab79f154a3] <==
	* I1205 21:05:44.402404       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:13.922693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:14.412999       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:06:43.931152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:06:44.426242       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:07:13.937322       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:07:14.435908       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:07:43.944439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:07:44.444796       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:08:13.951830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:08:14.454448       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:08:43.958520       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:08:44.463668       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:08:58.344967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="246.487µs"
	I1205 21:09:12.342944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="158.512µs"
	E1205 21:09:13.965730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:09:14.474667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:09:43.976568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:09:44.484550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:13.983638       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:14.495460       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:10:43.989849       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:10:44.506933       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:11:13.995923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1205 21:11:14.516003       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [0ae4b48879d4a4de8d69f67136cbe4c2e4805c0b16c54e1adfb7ad065b932815] <==
	* I1205 20:57:47.876295       1 server_others.go:69] "Using iptables proxy"
	I1205 20:57:47.936340       1 node.go:141] Successfully retrieved node IP: 192.168.72.180
	I1205 20:57:48.301981       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1205 20:57:48.302401       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:57:48.417262       1 server_others.go:152] "Using iptables Proxier"
	I1205 20:57:48.420222       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1205 20:57:48.421161       1 server.go:846] "Version info" version="v1.28.4"
	I1205 20:57:48.421210       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:57:48.441326       1 config.go:188] "Starting service config controller"
	I1205 20:57:48.442563       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1205 20:57:48.442685       1 config.go:97] "Starting endpoint slice config controller"
	I1205 20:57:48.443547       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1205 20:57:48.448649       1 config.go:315] "Starting node config controller"
	I1205 20:57:48.448738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1205 20:57:48.545240       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1205 20:57:48.545335       1 shared_informer.go:318] Caches are synced for service config
	I1205 20:57:48.548818       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d780a6357dc09ecf66ea26abc07fb4b6815e65438871fe8a2211c500256ef66d] <==
	* W1205 20:57:27.742726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:27.745264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:27.742761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:27.745427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.609768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:28.609874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.658417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:28.658482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 20:57:28.736502       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:28.736583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1205 20:57:28.752802       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:57:28.752880       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 20:57:28.887957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:28.888293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 20:57:28.919446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:28.919540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 20:57:28.976009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:57:28.976301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1205 20:57:29.036560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:29.036693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1205 20:57:29.073587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:29.073707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1205 20:57:31.330568       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:52:15 UTC, ends at Tue 2023-12-05 21:11:25 UTC. --
	Dec 05 21:08:36 embed-certs-331495 kubelet[3857]: E1205 21:08:36.326885    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:08:47 embed-certs-331495 kubelet[3857]: E1205 21:08:47.346541    3857 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:08:47 embed-certs-331495 kubelet[3857]: E1205 21:08:47.346625    3857 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 21:08:47 embed-certs-331495 kubelet[3857]: E1205 21:08:47.346827    3857 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bhhtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-wv2t6_kube-system(4cd8c975-aaf4-4ae0-9e6a-f644978f4127): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:08:47 embed-certs-331495 kubelet[3857]: E1205 21:08:47.346868    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:08:58 embed-certs-331495 kubelet[3857]: E1205 21:08:58.326563    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:09:12 embed-certs-331495 kubelet[3857]: E1205 21:09:12.325654    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:09:26 embed-certs-331495 kubelet[3857]: E1205 21:09:26.326919    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:09:31 embed-certs-331495 kubelet[3857]: E1205 21:09:31.394723    3857 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:09:31 embed-certs-331495 kubelet[3857]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:09:31 embed-certs-331495 kubelet[3857]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:09:31 embed-certs-331495 kubelet[3857]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:09:38 embed-certs-331495 kubelet[3857]: E1205 21:09:38.325720    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:09:49 embed-certs-331495 kubelet[3857]: E1205 21:09:49.326465    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:10:01 embed-certs-331495 kubelet[3857]: E1205 21:10:01.327570    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:10:14 embed-certs-331495 kubelet[3857]: E1205 21:10:14.325898    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:10:27 embed-certs-331495 kubelet[3857]: E1205 21:10:27.326206    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:10:31 embed-certs-331495 kubelet[3857]: E1205 21:10:31.399007    3857 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 05 21:10:31 embed-certs-331495 kubelet[3857]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:10:31 embed-certs-331495 kubelet[3857]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:10:31 embed-certs-331495 kubelet[3857]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:10:41 embed-certs-331495 kubelet[3857]: E1205 21:10:41.327243    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:10:53 embed-certs-331495 kubelet[3857]: E1205 21:10:53.328270    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:11:06 embed-certs-331495 kubelet[3857]: E1205 21:11:06.327339    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	Dec 05 21:11:17 embed-certs-331495 kubelet[3857]: E1205 21:11:17.328871    3857 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-wv2t6" podUID="4cd8c975-aaf4-4ae0-9e6a-f644978f4127"
	
	* 
	* ==> storage-provisioner [182d80c604bcb2dba7af07fb7e94cb9021c8854e04c4ccdcc57f80478515a4fa] <==
	* I1205 20:57:48.591287       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:48.603958       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:48.604131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:48.616732       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:48.617676       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9659a339-991f-4132-8dee-e7c6e5a0d76f", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9 became leader
	I1205 20:57:48.617863       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9!
	I1205 20:57:48.719055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-331495_d0eec26a-529a-474e-919d-f854b3788ba9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-331495 -n embed-certs-331495
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-331495 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-wv2t6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6: exit status 1 (65.384192ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-wv2t6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-331495 describe pod metrics-server-57f55c9bc5-wv2t6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (272.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (210.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 21:07:37.059984   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 21:07:46.651938   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 21:10:16.960361   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061206 -n old-k8s-version-061206
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-12-05 21:11:03.492884505 +0000 UTC m=+5776.305344132
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-061206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-061206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.884µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-061206 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-061206 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-061206 logs -n 25: (1.670442901s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p pause-405510                                        | pause-405510                 | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:44 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-601680                              | stopped-upgrade-601680       | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:41 UTC |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:41 UTC | 05 Dec 23 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-331495            | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC | 05 Dec 23 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:43 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-061206        | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143651             | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC | 05 Dec 23 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:44 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-873953                              | cert-expiration-873953       | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255695 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:45 UTC |
	|         | disable-driver-mounts-255695                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:45 UTC | 05 Dec 23 20:46 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-331495                 | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-331495                                  | embed-certs-331495           | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-061206             | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-463614  | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-061206                              | old-k8s-version-061206       | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC | 05 Dec 23 20:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143651                  | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:46 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-143651                                   | no-preload-143651            | jenkins | v1.32.0 | 05 Dec 23 20:47 UTC | 05 Dec 23 20:57 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-463614       | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-463614 | jenkins | v1.32.0 | 05 Dec 23 20:49 UTC | 05 Dec 23 20:56 UTC |
	|         | default-k8s-diff-port-463614                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 20:49:16
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:49:16.268811   47365 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:49:16.269102   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269113   47365 out.go:309] Setting ErrFile to fd 2...
	I1205 20:49:16.269117   47365 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:49:16.269306   47365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:49:16.269873   47365 out.go:303] Setting JSON to false
	I1205 20:49:16.270847   47365 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5509,"bootTime":1701803847,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:49:16.270909   47365 start.go:138] virtualization: kvm guest
	I1205 20:49:16.273160   47365 out.go:177] * [default-k8s-diff-port-463614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:49:16.275265   47365 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:49:16.275288   47365 notify.go:220] Checking for updates...
	I1205 20:49:16.276797   47365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:49:16.278334   47365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:49:16.279902   47365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:49:16.281580   47365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:49:16.283168   47365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:49:16.285134   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:49:16.285533   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.285605   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.300209   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I1205 20:49:16.300585   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.301134   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.301159   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.301488   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.301644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.301873   47365 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:49:16.302164   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:49:16.302215   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:49:16.317130   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I1205 20:49:16.317591   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:49:16.318064   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:49:16.318086   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:49:16.318475   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:49:16.318691   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:49:16.356580   47365 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:49:16.358350   47365 start.go:298] selected driver: kvm2
	I1205 20:49:16.358368   47365 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.358501   47365 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:49:16.359194   47365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.359276   47365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:49:16.374505   47365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 20:49:16.374939   47365 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:49:16.374999   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:49:16.375009   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:49:16.375022   47365 start_flags.go:323] config:
	{Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:49:16.375188   47365 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:49:16.377202   47365 out.go:177] * Starting control plane node default-k8s-diff-port-463614 in cluster default-k8s-diff-port-463614
	I1205 20:49:16.338499   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:19.410522   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:16.379191   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:49:16.379245   47365 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 20:49:16.379253   47365 cache.go:56] Caching tarball of preloaded images
	I1205 20:49:16.379352   47365 preload.go:174] Found /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:49:16.379364   47365 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1205 20:49:16.379500   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:49:16.379715   47365 start.go:365] acquiring machines lock for default-k8s-diff-port-463614: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:49:25.490576   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:28.562621   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:34.642596   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:37.714630   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:43.794573   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:46.866618   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:52.946521   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:49:56.018552   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:02.098566   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:05.170641   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:11.250570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:14.322507   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:20.402570   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:23.474581   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:29.554568   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:32.626541   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:38.706589   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:41.778594   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:47.858626   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:50.930560   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:50:57.010496   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:00.082587   46374 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.180:22: connect: no route to host
	I1205 20:51:03.086325   46700 start.go:369] acquired machines lock for "old-k8s-version-061206" in 4m14.42699626s
	I1205 20:51:03.086377   46700 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:03.086392   46700 fix.go:54] fixHost starting: 
	I1205 20:51:03.086799   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:03.086835   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:03.101342   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
	I1205 20:51:03.101867   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:03.102378   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:51:03.102403   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:03.102792   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:03.103003   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:03.103208   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:51:03.104894   46700 fix.go:102] recreateIfNeeded on old-k8s-version-061206: state=Stopped err=<nil>
	I1205 20:51:03.104914   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	W1205 20:51:03.105115   46700 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:03.106835   46700 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-061206" ...
	I1205 20:51:03.108621   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Start
	I1205 20:51:03.108840   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring networks are active...
	I1205 20:51:03.109627   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network default is active
	I1205 20:51:03.110007   46700 main.go:141] libmachine: (old-k8s-version-061206) Ensuring network mk-old-k8s-version-061206 is active
	I1205 20:51:03.110401   46700 main.go:141] libmachine: (old-k8s-version-061206) Getting domain xml...
	I1205 20:51:03.111358   46700 main.go:141] libmachine: (old-k8s-version-061206) Creating domain...
	I1205 20:51:03.084237   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:03.084288   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:51:03.086163   46374 machine.go:91] provisioned docker machine in 4m37.408875031s
	I1205 20:51:03.086199   46374 fix.go:56] fixHost completed within 4m37.430079633s
	I1205 20:51:03.086204   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 4m37.430101514s
	W1205 20:51:03.086231   46374 start.go:694] error starting host: provision: host is not running
	W1205 20:51:03.086344   46374 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1205 20:51:03.086356   46374 start.go:709] Will try again in 5 seconds ...
	I1205 20:51:04.367947   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting to get IP...
	I1205 20:51:04.368825   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.369277   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.369387   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.369246   47662 retry.go:31] will retry after 251.730796ms: waiting for machine to come up
	I1205 20:51:04.622984   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:04.623402   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:04.623431   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:04.623354   47662 retry.go:31] will retry after 383.862516ms: waiting for machine to come up
	I1205 20:51:05.008944   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.009308   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.009336   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.009237   47662 retry.go:31] will retry after 412.348365ms: waiting for machine to come up
	I1205 20:51:05.422846   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.423235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.423253   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.423198   47662 retry.go:31] will retry after 568.45875ms: waiting for machine to come up
	I1205 20:51:05.992882   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:05.993236   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:05.993264   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:05.993182   47662 retry.go:31] will retry after 494.410091ms: waiting for machine to come up
	I1205 20:51:06.488852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:06.489210   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:06.489235   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:06.489151   47662 retry.go:31] will retry after 640.351521ms: waiting for machine to come up
	I1205 20:51:07.130869   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:07.131329   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:07.131355   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:07.131273   47662 retry.go:31] will retry after 1.164209589s: waiting for machine to come up
	I1205 20:51:08.296903   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:08.297333   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:08.297365   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:08.297280   47662 retry.go:31] will retry after 1.479760715s: waiting for machine to come up
	I1205 20:51:08.087457   46374 start.go:365] acquiring machines lock for embed-certs-331495: {Name:mk08d5ef8108bf0ee52be7a73ac30851a06e7d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:51:09.778949   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:09.779414   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:09.779435   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:09.779379   47662 retry.go:31] will retry after 1.577524888s: waiting for machine to come up
	I1205 20:51:11.359094   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:11.359468   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:11.359499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:11.359405   47662 retry.go:31] will retry after 1.742003001s: waiting for machine to come up
	I1205 20:51:13.103927   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:13.104416   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:13.104446   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:13.104365   47662 retry.go:31] will retry after 2.671355884s: waiting for machine to come up
	I1205 20:51:15.777050   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:15.777542   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:15.777573   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:15.777491   47662 retry.go:31] will retry after 2.435682478s: waiting for machine to come up
	I1205 20:51:18.214485   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:18.214943   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | unable to find current IP address of domain old-k8s-version-061206 in network mk-old-k8s-version-061206
	I1205 20:51:18.214965   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | I1205 20:51:18.214920   47662 retry.go:31] will retry after 2.827460605s: waiting for machine to come up
	I1205 20:51:22.191314   46866 start.go:369] acquired machines lock for "no-preload-143651" in 4m16.377152417s
	I1205 20:51:22.191373   46866 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:22.191380   46866 fix.go:54] fixHost starting: 
	I1205 20:51:22.191764   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:22.191801   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:22.208492   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I1205 20:51:22.208882   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:22.209423   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:51:22.209448   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:22.209839   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:22.210041   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:22.210202   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:51:22.211737   46866 fix.go:102] recreateIfNeeded on no-preload-143651: state=Stopped err=<nil>
	I1205 20:51:22.211762   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	W1205 20:51:22.211960   46866 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:22.214319   46866 out.go:177] * Restarting existing kvm2 VM for "no-preload-143651" ...
	I1205 20:51:21.044392   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044931   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has current primary IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.044953   46700 main.go:141] libmachine: (old-k8s-version-061206) Found IP for machine: 192.168.50.116
	I1205 20:51:21.044964   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserving static IP address...
	I1205 20:51:21.045337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.045357   46700 main.go:141] libmachine: (old-k8s-version-061206) Reserved static IP address: 192.168.50.116
	I1205 20:51:21.045371   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | skip adding static IP to network mk-old-k8s-version-061206 - found existing host DHCP lease matching {name: "old-k8s-version-061206", mac: "52:54:00:f9:f7:bc", ip: "192.168.50.116"}
	I1205 20:51:21.045381   46700 main.go:141] libmachine: (old-k8s-version-061206) Waiting for SSH to be available...
	I1205 20:51:21.045398   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Getting to WaitForSSH function...
	I1205 20:51:21.047343   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047678   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.047719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.047758   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH client type: external
	I1205 20:51:21.047789   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa (-rw-------)
	I1205 20:51:21.047817   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:21.047832   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | About to run SSH command:
	I1205 20:51:21.047841   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | exit 0
	I1205 20:51:21.134741   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:21.135100   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetConfigRaw
	I1205 20:51:21.135770   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.138325   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138656   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.138689   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.138908   46700 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/config.json ...
	I1205 20:51:21.139128   46700 machine.go:88] provisioning docker machine ...
	I1205 20:51:21.139147   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.139351   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139516   46700 buildroot.go:166] provisioning hostname "old-k8s-version-061206"
	I1205 20:51:21.139534   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.139714   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.141792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142136   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.142163   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.142294   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.142471   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142609   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.142741   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.142868   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.143244   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.143264   46700 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-061206 && echo "old-k8s-version-061206" | sudo tee /etc/hostname
	I1205 20:51:21.267170   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-061206
	
	I1205 20:51:21.267193   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.270042   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270524   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.270556   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.270749   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.270945   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271115   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.271229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.271407   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.271735   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.271752   46700 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-061206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-061206/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-061206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:21.391935   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:51:21.391959   46700 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:21.391983   46700 buildroot.go:174] setting up certificates
	I1205 20:51:21.391994   46700 provision.go:83] configureAuth start
	I1205 20:51:21.392002   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetMachineName
	I1205 20:51:21.392264   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:21.395020   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395337   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.395375   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.395517   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.397499   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397760   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.397792   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.397937   46700 provision.go:138] copyHostCerts
	I1205 20:51:21.397994   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:21.398007   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:21.398090   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:21.398222   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:21.398234   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:21.398293   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:21.398383   46700 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:21.398394   46700 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:21.398432   46700 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:21.398499   46700 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-061206 san=[192.168.50.116 192.168.50.116 localhost 127.0.0.1 minikube old-k8s-version-061206]
	I1205 20:51:21.465637   46700 provision.go:172] copyRemoteCerts
	I1205 20:51:21.465701   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:21.465737   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.468386   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468688   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.468719   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.468896   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.469092   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.469232   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.469349   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:21.555915   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:21.578545   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 20:51:21.603058   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:21.624769   46700 provision.go:86] duration metric: configureAuth took 232.761874ms
	I1205 20:51:21.624798   46700 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:21.624972   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:51:21.625065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.627589   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.627953   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.627991   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.628085   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.628300   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628477   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.628643   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.628867   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:21.629237   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:21.629262   46700 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:21.945366   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:21.945398   46700 machine.go:91] provisioned docker machine in 806.257704ms
	I1205 20:51:21.945410   46700 start.go:300] post-start starting for "old-k8s-version-061206" (driver="kvm2")
	I1205 20:51:21.945423   46700 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:21.945442   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:21.945803   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:21.945833   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:21.948699   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949083   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:21.949116   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:21.949247   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:21.949455   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:21.949642   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:21.949780   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.036694   46700 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:22.040857   46700 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:22.040887   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:22.040961   46700 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:22.041067   46700 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:22.041167   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:22.050610   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:22.072598   46700 start.go:303] post-start completed in 127.17514ms
	I1205 20:51:22.072621   46700 fix.go:56] fixHost completed within 18.986227859s
	I1205 20:51:22.072650   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.075382   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.075779   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.075809   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.076014   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.076218   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076390   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.076548   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.076677   46700 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:22.076979   46700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.116 22 <nil> <nil>}
	I1205 20:51:22.076989   46700 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:22.191127   46700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809482.140720971
	
	I1205 20:51:22.191150   46700 fix.go:206] guest clock: 1701809482.140720971
	I1205 20:51:22.191160   46700 fix.go:219] Guest: 2023-12-05 20:51:22.140720971 +0000 UTC Remote: 2023-12-05 20:51:22.072625275 +0000 UTC m=+273.566123117 (delta=68.095696ms)
	I1205 20:51:22.191206   46700 fix.go:190] guest clock delta is within tolerance: 68.095696ms
	I1205 20:51:22.191211   46700 start.go:83] releasing machines lock for "old-k8s-version-061206", held for 19.104851926s
	I1205 20:51:22.191239   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.191530   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:22.194285   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194676   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.194721   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.194832   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195352   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195535   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:51:22.195614   46700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:22.195660   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.195729   46700 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:22.195759   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:51:22.198085   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198438   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198493   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198522   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198619   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.198813   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.198893   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:22.198922   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:22.198980   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199065   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:51:22.199139   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.199172   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:51:22.199274   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:51:22.199426   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:51:22.284598   46700 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:22.304917   46700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:22.454449   46700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:22.461344   46700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:22.461409   46700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:22.483106   46700 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:22.483130   46700 start.go:475] detecting cgroup driver to use...
	I1205 20:51:22.483202   46700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:22.498157   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:22.510661   46700 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:22.510712   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:22.525004   46700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:22.538499   46700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:22.652874   46700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:22.787215   46700 docker.go:219] disabling docker service ...
	I1205 20:51:22.787272   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:22.800315   46700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:22.812031   46700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:22.926202   46700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:23.057043   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:23.072205   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:23.092858   46700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1205 20:51:23.092916   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.103613   46700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:23.103680   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.113992   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.124132   46700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:23.134007   46700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:23.144404   46700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:23.153679   46700 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:23.153735   46700 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:23.167935   46700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:23.178944   46700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:23.294314   46700 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:23.469887   46700 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:23.469957   46700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:23.475308   46700 start.go:543] Will wait 60s for crictl version
	I1205 20:51:23.475384   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:23.479436   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:23.520140   46700 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:23.520223   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.572184   46700 ssh_runner.go:195] Run: crio --version
	I1205 20:51:23.619296   46700 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I1205 20:51:22.215866   46866 main.go:141] libmachine: (no-preload-143651) Calling .Start
	I1205 20:51:22.216026   46866 main.go:141] libmachine: (no-preload-143651) Ensuring networks are active...
	I1205 20:51:22.216719   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network default is active
	I1205 20:51:22.217060   46866 main.go:141] libmachine: (no-preload-143651) Ensuring network mk-no-preload-143651 is active
	I1205 20:51:22.217553   46866 main.go:141] libmachine: (no-preload-143651) Getting domain xml...
	I1205 20:51:22.218160   46866 main.go:141] libmachine: (no-preload-143651) Creating domain...
	I1205 20:51:23.560327   46866 main.go:141] libmachine: (no-preload-143651) Waiting to get IP...
	I1205 20:51:23.561191   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.561601   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.561675   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.561566   47785 retry.go:31] will retry after 269.644015ms: waiting for machine to come up
	I1205 20:51:23.833089   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:23.833656   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:23.833695   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:23.833612   47785 retry.go:31] will retry after 363.018928ms: waiting for machine to come up
	I1205 20:51:24.198250   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.198767   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.198797   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.198717   47785 retry.go:31] will retry after 464.135158ms: waiting for machine to come up
	I1205 20:51:24.664518   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:24.664945   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:24.664970   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:24.664902   47785 retry.go:31] will retry after 383.704385ms: waiting for machine to come up
	I1205 20:51:25.050654   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.051112   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.051142   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.051078   47785 retry.go:31] will retry after 620.614799ms: waiting for machine to come up
	I1205 20:51:25.672997   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:25.673452   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:25.673485   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:25.673394   47785 retry.go:31] will retry after 594.447783ms: waiting for machine to come up
	I1205 20:51:23.620743   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetIP
	I1205 20:51:23.623372   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623672   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:51:23.623702   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:51:23.623934   46700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:23.628382   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:23.642698   46700 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 20:51:23.642770   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:23.686679   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:23.686776   46700 ssh_runner.go:195] Run: which lz4
	I1205 20:51:23.690994   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:51:23.695445   46700 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:51:23.695480   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1205 20:51:25.519961   46700 crio.go:444] Took 1.828999 seconds to copy over tarball
	I1205 20:51:25.520052   46700 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:51:28.545261   46700 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.025151809s)
	I1205 20:51:28.545291   46700 crio.go:451] Took 3.025302 seconds to extract the tarball
	I1205 20:51:28.545303   46700 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:51:26.269269   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:26.269771   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:26.269815   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:26.269741   47785 retry.go:31] will retry after 872.968768ms: waiting for machine to come up
	I1205 20:51:27.144028   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:27.144505   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:27.144538   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:27.144467   47785 retry.go:31] will retry after 1.067988446s: waiting for machine to come up
	I1205 20:51:28.213709   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:28.214161   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:28.214184   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:28.214111   47785 retry.go:31] will retry after 1.483033238s: waiting for machine to come up
	I1205 20:51:29.699402   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:29.699928   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:29.699973   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:29.699861   47785 retry.go:31] will retry after 1.985034944s: waiting for machine to come up
	I1205 20:51:28.586059   46700 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:28.631610   46700 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1205 20:51:28.631643   46700 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:28.631749   46700 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.631797   46700 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.631754   46700 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.631937   46700 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.632007   46700 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1205 20:51:28.631930   46700 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.632029   46700 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.631760   46700 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633385   46700 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.633397   46700 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1205 20:51:28.633416   46700 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.633494   46700 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.633496   46700 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.633512   46700 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.633518   46700 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.633497   46700 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.789873   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.811118   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.811610   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.818440   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.818470   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1205 20:51:28.820473   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.849060   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.855915   46700 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1205 20:51:28.855966   46700 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.856023   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953211   46700 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1205 20:51:28.953261   46700 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.953289   46700 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1205 20:51:28.953315   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.953325   46700 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:28.953363   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.968680   46700 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:28.992735   46700 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1205 20:51:28.992781   46700 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1205 20:51:28.992825   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992847   46700 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1205 20:51:28.992878   46700 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1205 20:51:28.992907   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992917   46700 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1205 20:51:28.992830   46700 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1205 20:51:28.992948   46700 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:28.992980   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1205 20:51:28.992994   46700 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:28.993009   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.993029   46700 ssh_runner.go:195] Run: which crictl
	I1205 20:51:28.992944   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1205 20:51:28.993064   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1205 20:51:29.193946   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1205 20:51:29.194040   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1205 20:51:29.194095   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1205 20:51:29.194188   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1205 20:51:29.194217   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1205 20:51:29.194257   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1205 20:51:29.194279   46700 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1205 20:51:29.299767   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1205 20:51:29.299772   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1205 20:51:29.299836   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1205 20:51:29.299855   46700 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1205 20:51:29.299870   46700 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.304934   46700 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I1205 20:51:29.304952   46700 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I1205 20:51:29.305004   46700 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I1205 20:51:31.467263   46700 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (2.162226207s)
	I1205 20:51:31.467295   46700 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1205 20:51:31.467342   46700 cache_images.go:92] LoadImages completed in 2.835682781s
	W1205 20:51:31.467425   46700 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I1205 20:51:31.467515   46700 ssh_runner.go:195] Run: crio config
	I1205 20:51:31.527943   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:31.527968   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:31.527989   46700 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:51:31.528016   46700 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.116 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-061206 NodeName:old-k8s-version-061206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 20:51:31.528162   46700 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-061206"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-061206
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.116:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:51:31.528265   46700 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-061206 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:51:31.528332   46700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1205 20:51:31.538013   46700 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:51:31.538090   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:51:31.547209   46700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 20:51:31.565720   46700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:51:31.582290   46700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I1205 20:51:31.599081   46700 ssh_runner.go:195] Run: grep 192.168.50.116	control-plane.minikube.internal$ /etc/hosts
	I1205 20:51:31.603007   46700 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:31.615348   46700 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206 for IP: 192.168.50.116
	I1205 20:51:31.615385   46700 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:51:31.615582   46700 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:51:31.615657   46700 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:51:31.615757   46700 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.key
	I1205 20:51:31.615846   46700 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key.ae4cb88a
	I1205 20:51:31.615902   46700 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key
	I1205 20:51:31.616079   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:51:31.616150   46700 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:51:31.616172   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:51:31.616216   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:51:31.616261   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:51:31.616302   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:51:31.616375   46700 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:31.617289   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:51:31.645485   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:51:31.675015   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:51:31.699520   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:51:31.727871   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:51:31.751623   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:51:31.776679   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:51:31.799577   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:51:31.827218   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:51:31.849104   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:51:31.870931   46700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:51:31.894940   46700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:51:31.912233   46700 ssh_runner.go:195] Run: openssl version
	I1205 20:51:31.918141   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:51:31.928422   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932915   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.932985   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:51:31.938327   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:51:31.948580   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:51:31.958710   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963091   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.963155   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:51:31.968667   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:51:31.981987   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:51:31.995793   46700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001622   46700 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.001709   46700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:51:32.008883   46700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:51:32.021378   46700 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:51:32.025902   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:51:32.031917   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:51:32.037649   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:51:32.043121   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:51:32.048806   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:51:32.054266   46700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
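	The eight "openssl x509 ... -checkend 86400" runs above are how the restart path decides whether the existing control-plane certificates can be reused: each command exits 0 only if the certificate is still valid 24 hours from now. The Go snippet below is a minimal, illustrative equivalent of that check; it is not minikube's implementation, and the file path is simply one of the certs listed in the log.

	// expiry_check.go - sketch of what "openssl x509 -checkend 86400" verifies:
	// the certificate's NotAfter must be later than now + 24h.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Equivalent of -checkend 86400: still valid one day from now.
		return cert.NotAfter.After(time.Now().Add(d)), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}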
	I1205 20:51:32.060014   46700 kubeadm.go:404] StartCluster: {Name:old-k8s-version-061206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-061206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:51:32.060131   46700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:51:32.060186   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:32.101244   46700 cri.go:89] found id: ""
	I1205 20:51:32.101317   46700 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:51:32.111900   46700 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:51:32.111925   46700 kubeadm.go:636] restartCluster start
	I1205 20:51:32.111989   46700 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:51:32.121046   46700 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.122654   46700 kubeconfig.go:92] found "old-k8s-version-061206" server: "https://192.168.50.116:8443"
	I1205 20:51:32.126231   46700 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:51:32.135341   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.135404   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.147308   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.147325   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.147367   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.158453   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:32.659254   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:32.659357   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:32.672490   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:33.159599   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.159693   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.171948   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:31.688072   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:31.688591   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:31.688627   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:31.688516   47785 retry.go:31] will retry after 1.83172898s: waiting for machine to come up
	I1205 20:51:33.521647   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:33.522137   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:33.522167   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:33.522083   47785 retry.go:31] will retry after 3.41334501s: waiting for machine to come up
	I1205 20:51:33.659273   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:33.659359   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:33.675427   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.158981   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.159075   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.173025   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:34.659439   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:34.659547   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:34.672184   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.159408   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.159472   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.173149   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:35.659490   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:35.659626   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:35.673261   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.159480   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.159569   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.172185   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.659417   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:36.659528   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:36.675853   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.159404   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.159495   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.172824   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:37.659361   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:37.659456   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:37.671599   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:38.158754   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.158834   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.171170   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:36.939441   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:36.939880   46866 main.go:141] libmachine: (no-preload-143651) DBG | unable to find current IP address of domain no-preload-143651 in network mk-no-preload-143651
	I1205 20:51:36.939905   46866 main.go:141] libmachine: (no-preload-143651) DBG | I1205 20:51:36.939843   47785 retry.go:31] will retry after 3.715659301s: waiting for machine to come up
	I1205 20:51:40.659432   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659901   46866 main.go:141] libmachine: (no-preload-143651) Found IP for machine: 192.168.61.162
	I1205 20:51:40.659937   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has current primary IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.659973   46866 main.go:141] libmachine: (no-preload-143651) Reserving static IP address...
	I1205 20:51:40.660324   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.660352   46866 main.go:141] libmachine: (no-preload-143651) Reserved static IP address: 192.168.61.162
	I1205 20:51:40.660372   46866 main.go:141] libmachine: (no-preload-143651) DBG | skip adding static IP to network mk-no-preload-143651 - found existing host DHCP lease matching {name: "no-preload-143651", mac: "52:54:00:2e:09:28", ip: "192.168.61.162"}
	I1205 20:51:40.660391   46866 main.go:141] libmachine: (no-preload-143651) DBG | Getting to WaitForSSH function...
	I1205 20:51:40.660407   46866 main.go:141] libmachine: (no-preload-143651) Waiting for SSH to be available...
	I1205 20:51:40.662619   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663014   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.663042   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.663226   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH client type: external
	I1205 20:51:40.663257   46866 main.go:141] libmachine: (no-preload-143651) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa (-rw-------)
	I1205 20:51:40.663293   46866 main.go:141] libmachine: (no-preload-143651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:51:40.663312   46866 main.go:141] libmachine: (no-preload-143651) DBG | About to run SSH command:
	I1205 20:51:40.663328   46866 main.go:141] libmachine: (no-preload-143651) DBG | exit 0
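	"Waiting for SSH to be available..." above amounts to repeatedly running "exit 0" on the guest over SSH with the machine's private key until the command succeeds. A rough sketch of that probe, using the external ssh client as in the log; the flag set is abbreviated and the 3-second retry cadence is an assumption, not taken from the log.

	// wait_for_ssh_sketch.go - keep probing the guest with `ssh ... exit 0`
	// until the connection and key are accepted, or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForSSH(addr, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+addr, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil // guest sshd is up and the key is accepted
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
	}

	func main() {
		fmt.Println(waitForSSH("192.168.61.162",
			"/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa",
			5*time.Minute))
	}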
	I1205 20:51:41.891099   47365 start.go:369] acquired machines lock for "default-k8s-diff-port-463614" in 2m25.511348838s
	I1205 20:51:41.891167   47365 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:51:41.891179   47365 fix.go:54] fixHost starting: 
	I1205 20:51:41.891625   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:51:41.891666   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:51:41.910556   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1205 20:51:41.910956   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:51:41.911447   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:51:41.911474   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:51:41.911792   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:51:41.912020   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:51:41.912168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:51:41.913796   47365 fix.go:102] recreateIfNeeded on default-k8s-diff-port-463614: state=Stopped err=<nil>
	I1205 20:51:41.913824   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	W1205 20:51:41.914032   47365 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:51:41.916597   47365 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-463614" ...
	I1205 20:51:40.754683   46866 main.go:141] libmachine: (no-preload-143651) DBG | SSH cmd err, output: <nil>: 
	I1205 20:51:40.755055   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetConfigRaw
	I1205 20:51:40.755663   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:40.758165   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758502   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.758534   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.758722   46866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/config.json ...
	I1205 20:51:40.758916   46866 machine.go:88] provisioning docker machine ...
	I1205 20:51:40.758933   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:40.759160   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759358   46866 buildroot.go:166] provisioning hostname "no-preload-143651"
	I1205 20:51:40.759384   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:40.759555   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.762125   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762513   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.762546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.762688   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.762894   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763070   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.763211   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.763392   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.763747   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.763761   46866 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-143651 && echo "no-preload-143651" | sudo tee /etc/hostname
	I1205 20:51:40.895095   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-143651
	
	I1205 20:51:40.895123   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:40.897864   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898199   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:40.898236   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:40.898419   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:40.898629   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898814   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:40.898972   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:40.899147   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:40.899454   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:40.899472   46866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-143651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-143651/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-143651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:51:41.027721   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
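	The hostname provisioning step above sets the guest hostname and then patches /etc/hosts so that a "127.0.1.1 no-preload-143651" entry exists, either by rewriting an existing 127.0.1.1 line or by appending one. A simplified sketch of that edit as a pure function follows; it is illustrative only, the real step being the sed/tee command shown in the log, run over SSH.

	// hosts_sketch.go - ensure /etc/hosts maps 127.0.1.1 to the machine name.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(hosts, name string) string {
		// Already present? (any line whose last field is the hostname)
		for _, line := range strings.Split(hosts, "\n") {
			fields := strings.Fields(line)
			if len(fields) > 0 && fields[len(fields)-1] == name {
				return hosts
			}
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "no-preload-143651"))
	}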
	I1205 20:51:41.027758   46866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:51:41.027802   46866 buildroot.go:174] setting up certificates
	I1205 20:51:41.027813   46866 provision.go:83] configureAuth start
	I1205 20:51:41.027827   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetMachineName
	I1205 20:51:41.028120   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.031205   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031561   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.031592   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.031715   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.034163   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034531   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.034563   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.034697   46866 provision.go:138] copyHostCerts
	I1205 20:51:41.034750   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:51:41.034767   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:51:41.034826   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:51:41.034918   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:51:41.034925   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:51:41.034947   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:51:41.035018   46866 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:51:41.035029   46866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:51:41.035056   46866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:51:41.035129   46866 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.no-preload-143651 san=[192.168.61.162 192.168.61.162 localhost 127.0.0.1 minikube no-preload-143651]
	I1205 20:51:41.152743   46866 provision.go:172] copyRemoteCerts
	I1205 20:51:41.152808   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:51:41.152836   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.155830   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156153   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.156181   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.156380   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.156587   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.156769   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.156914   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.247182   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 20:51:41.271756   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:51:41.296485   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:51:41.317870   46866 provision.go:86] duration metric: configureAuth took 290.041804ms
	I1205 20:51:41.317900   46866 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:51:41.318059   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:51:41.318130   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.320631   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.320907   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.320935   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.321099   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.321310   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321436   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.321558   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.321671   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.321981   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.321998   46866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:51:41.637500   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:51:41.637536   46866 machine.go:91] provisioned docker machine in 878.607379ms
	I1205 20:51:41.637551   46866 start.go:300] post-start starting for "no-preload-143651" (driver="kvm2")
	I1205 20:51:41.637565   46866 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:51:41.637586   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.637928   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:51:41.637959   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.640546   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.640941   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.640969   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.641158   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.641348   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.641521   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.641701   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.733255   46866 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:51:41.737558   46866 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:51:41.737582   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:51:41.737656   46866 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:51:41.737747   46866 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:51:41.737867   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:51:41.747400   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:51:41.769318   46866 start.go:303] post-start completed in 131.753103ms
	I1205 20:51:41.769341   46866 fix.go:56] fixHost completed within 19.577961747s
	I1205 20:51:41.769360   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.772098   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772433   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.772469   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.772614   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.772830   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773000   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.773141   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.773329   46866 main.go:141] libmachine: Using SSH client type: native
	I1205 20:51:41.773689   46866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.61.162 22 <nil> <nil>}
	I1205 20:51:41.773701   46866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:51:41.890932   46866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809501.865042950
	
	I1205 20:51:41.890965   46866 fix.go:206] guest clock: 1701809501.865042950
	I1205 20:51:41.890977   46866 fix.go:219] Guest: 2023-12-05 20:51:41.86504295 +0000 UTC Remote: 2023-12-05 20:51:41.769344785 +0000 UTC m=+276.111345943 (delta=95.698165ms)
	I1205 20:51:41.891000   46866 fix.go:190] guest clock delta is within tolerance: 95.698165ms
	I1205 20:51:41.891005   46866 start.go:83] releasing machines lock for "no-preload-143651", held for 19.699651094s
	I1205 20:51:41.891037   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.891349   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:41.893760   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894151   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.894188   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.894393   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.894953   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895147   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:51:41.895233   46866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:51:41.895275   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.895379   46866 ssh_runner.go:195] Run: cat /version.json
	I1205 20:51:41.895409   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:51:41.897961   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898107   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898353   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898396   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898610   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898663   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:41.898693   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:41.898781   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.898835   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.898979   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:51:41.899138   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.899149   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:51:41.899296   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:51:41.987662   46866 ssh_runner.go:195] Run: systemctl --version
	I1205 20:51:42.008983   46866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:51:42.150028   46866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:51:42.156643   46866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:51:42.156719   46866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:51:42.175508   46866 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:51:42.175534   46866 start.go:475] detecting cgroup driver to use...
	I1205 20:51:42.175620   46866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:51:42.189808   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:51:42.202280   46866 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:51:42.202342   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:51:42.220906   46866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:51:42.238796   46866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:51:42.364162   46866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:51:42.493990   46866 docker.go:219] disabling docker service ...
	I1205 20:51:42.494066   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:51:42.507419   46866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:51:42.519769   46866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:51:42.639608   46866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:51:42.764015   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:51:42.776984   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:51:42.797245   46866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:51:42.797307   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.807067   46866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:51:42.807150   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.816699   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.825896   46866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:51:42.835144   46866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:51:42.844910   46866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:51:42.853054   46866 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:51:42.853127   46866 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:51:42.865162   46866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:51:42.874929   46866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:51:42.989397   46866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:51:43.173537   46866 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:51:43.173613   46866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:51:43.179392   46866 start.go:543] Will wait 60s for crictl version
	I1205 20:51:43.179449   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.183693   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:51:43.233790   46866 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:51:43.233862   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.291711   46866 ssh_runner.go:195] Run: crio --version
	I1205 20:51:43.343431   46866 out.go:177] * Preparing Kubernetes v1.29.0-rc.1 on CRI-O 1.24.1 ...
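	Before this point the runtime was reconfigured by rewriting /etc/crio/crio.conf.d/02-crio.conf in place: the pause_image key is pointed at registry.k8s.io/pause:3.9 and cgroup_manager is forced to "cgroupfs", after which crio is restarted. The sketch below mirrors those two logged sed substitutions; it is an illustration, not minikube's code.

	// crio_conf_sketch.go - replace the pause_image and cgroup_manager lines
	// in a CRI-O drop-in config, the same effect as the two sed -i commands.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, fmt.Sprintf(`pause_image = "%s"`, pauseImage))
		cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgm.ReplaceAllString(conf, fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}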
	I1205 20:51:38.658807   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:38.658875   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:38.672580   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.159258   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.159363   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.172800   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:39.659451   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:39.659544   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:39.673718   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.159346   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.159436   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.172524   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:40.659093   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:40.659170   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:40.671848   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.159453   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.159534   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.171845   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:41.659456   46700 api_server.go:166] Checking apiserver status ...
	I1205 20:51:41.659520   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:51:41.671136   46700 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:51:42.136008   46700 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:51:42.136039   46700 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:51:42.136049   46700 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:51:42.136130   46700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:51:42.183279   46700 cri.go:89] found id: ""
	I1205 20:51:42.183375   46700 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:51:42.202550   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:51:42.213978   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:51:42.214041   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223907   46700 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:51:42.223932   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:42.349280   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.257422   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.483371   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.345205   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetIP
	I1205 20:51:43.348398   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348738   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:51:43.348769   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:51:43.348965   46866 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 20:51:43.354536   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:51:43.368512   46866 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 20:51:43.368550   46866 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:51:43.411924   46866 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.1". assuming images are not preloaded.
	I1205 20:51:43.411956   46866 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.1 registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 registry.k8s.io/kube-scheduler:v1.29.0-rc.1 registry.k8s.io/kube-proxy:v1.29.0-rc.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 20:51:43.412050   46866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.412030   46866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.412084   46866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.412097   46866 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1205 20:51:43.412134   46866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.412072   46866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.412021   46866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.412056   46866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413334   46866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.413403   46866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.413481   46866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.413539   46866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.413554   46866 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1205 20:51:43.413337   46866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.413624   46866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.413405   46866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.563942   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.565063   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.567071   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.572782   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.577279   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.579820   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I1205 20:51:43.591043   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.735723   46866 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.735988   46866 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I1205 20:51:43.736032   46866 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.736073   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.791375   46866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" does not exist at hash "b7918a1eaa37ee8f677a10278009aa059854630785fbb7306140641494aeec09" in container runtime
	I1205 20:51:43.791424   46866 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.791473   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.810236   46866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" does not exist at hash "b953575c86991f5317a5f4b7e3f43c8a43d771ca400d96ba473f43c9a1ab3542" in container runtime
	I1205 20:51:43.810290   46866 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.810339   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841046   46866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" does not exist at hash "5c4a644510d6f168869cf0620dc09abd02a6fe054538f92a12c7fd7365ffb956" in container runtime
	I1205 20:51:43.841255   46866 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.841347   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.841121   46866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I1205 20:51:43.841565   46866 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.841635   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866289   46866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.1" does not exist at hash "86e0ea23640eb90027102f4e4ff188211c290910bcd9af7402fd429d43d281ff" in container runtime
	I1205 20:51:43.866344   46866 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.866368   46866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 20:51:43.866390   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866417   46866 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.866465   46866 ssh_runner.go:195] Run: which crictl
	I1205 20:51:43.866469   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I1205 20:51:43.866597   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1205 20:51:43.866685   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1205 20:51:43.866780   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I1205 20:51:43.866853   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1205 20:51:43.994581   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994691   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:43.994757   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I1205 20:51:43.994711   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1205 20:51:43.994792   46866 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:51:43.994849   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:44.000411   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.000501   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:44.008960   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I1205 20:51:44.009001   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:44.009071   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:44.073217   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073238   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073275   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I1205 20:51:44.073282   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1
	I1205 20:51:44.073304   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073376   46866 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 20:51:44.073397   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:44.073439   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1 (exists)
	I1205 20:51:44.073444   46866 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:44.073471   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I1205 20:51:44.073504   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1 (exists)
	I1205 20:51:41.918223   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Start
	I1205 20:51:41.918414   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring networks are active...
	I1205 20:51:41.919085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network default is active
	I1205 20:51:41.919401   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Ensuring network mk-default-k8s-diff-port-463614 is active
	I1205 20:51:41.919733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Getting domain xml...
	I1205 20:51:41.920368   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Creating domain...
	I1205 20:51:43.304717   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting to get IP...
	I1205 20:51:43.305837   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.306294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.306202   47900 retry.go:31] will retry after 208.55347ms: waiting for machine to come up
	I1205 20:51:43.516782   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517269   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.517297   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.517232   47900 retry.go:31] will retry after 370.217439ms: waiting for machine to come up
	I1205 20:51:43.889085   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889580   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:43.889615   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:43.889531   47900 retry.go:31] will retry after 395.420735ms: waiting for machine to come up
	I1205 20:51:44.286007   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.286563   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.286481   47900 retry.go:31] will retry after 437.496548ms: waiting for machine to come up
	I1205 20:51:44.726145   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726803   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:44.726850   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:44.726748   47900 retry.go:31] will retry after 628.791518ms: waiting for machine to come up
	I1205 20:51:45.357823   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358285   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:45.358310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:45.358232   47900 retry.go:31] will retry after 661.164562ms: waiting for machine to come up
	I1205 20:51:46.021711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022151   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:46.022177   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:46.022120   47900 retry.go:31] will retry after 1.093521736s: waiting for machine to come up
	I1205 20:51:43.607841   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:43.765000   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:51:43.765097   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:43.776916   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.306400   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:44.805894   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.305832   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:51:45.332834   46700 api_server.go:72] duration metric: took 1.567832932s to wait for apiserver process to appear ...
	I1205 20:51:45.332867   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:51:45.332884   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:46.537183   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.1: (2.463870183s)
	I1205 20:51:46.537256   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 from cache
	I1205 20:51:46.537311   46866 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:46.537336   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.46384231s)
	I1205 20:51:46.537260   46866 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (2.463842778s)
	I1205 20:51:46.537373   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 20:51:46.537394   46866 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1 (exists)
	I1205 20:51:46.537411   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I1205 20:51:50.326248   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.788789868s)
	I1205 20:51:50.326299   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I1205 20:51:50.326337   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:50.326419   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1
	I1205 20:51:47.117386   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117831   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:47.117861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:47.117800   47900 retry.go:31] will retry after 1.255113027s: waiting for machine to come up
	I1205 20:51:48.375199   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375692   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:48.375733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:48.375655   47900 retry.go:31] will retry after 1.65255216s: waiting for machine to come up
	I1205 20:51:50.029505   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029904   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:50.029933   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:50.029860   47900 retry.go:31] will retry after 2.072960988s: waiting for machine to come up
	I1205 20:51:50.334417   46700 api_server.go:269] stopped: https://192.168.50.116:8443/healthz: Get "https://192.168.50.116:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 20:51:50.334459   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.286979   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:51:52.287013   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:51:52.787498   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:52.871766   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:52.871803   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.287974   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.301921   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1205 20:51:53.301962   46700 api_server.go:103] status: https://192.168.50.116:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1205 20:51:53.787781   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:51:53.799426   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
	I1205 20:51:53.809064   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:51:53.809101   46700 api_server.go:131] duration metric: took 8.476226007s to wait for apiserver health ...
	I1205 20:51:53.809112   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:51:53.809120   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:51:53.811188   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
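The api_server.go lines above show the readiness loop minikube runs against the restarted control plane: it polls /healthz, treating a 403 (anonymous user) or 500 (poststart hooks still pending) as "not ready", until the endpoint returns 200 "ok". The following Go sketch illustrates that polling pattern only; it is not minikube's implementation, and the URL, timeout, retry interval, and TLS handling are assumptions chosen for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
	// or the deadline passes. Illustrative only; minikube's real loop lives in
	// api_server.go and differs in detail.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test VM serves a self-signed certificate, so verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 and 500 responses, as seen in the log above, simply mean "retry later".
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.50.116:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}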
	I1205 20:51:53.496825   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.1: (3.170377466s)
	I1205 20:51:53.496856   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 from cache
	I1205 20:51:53.496877   46866 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:53.496925   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I1205 20:51:55.657835   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.160865472s)
	I1205 20:51:55.657869   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I1205 20:51:55.657898   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:55.657955   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1
	I1205 20:51:52.104758   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105274   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:52.105301   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:52.105232   47900 retry.go:31] will retry after 2.172151449s: waiting for machine to come up
	I1205 20:51:54.279576   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280091   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:54.280119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:54.280054   47900 retry.go:31] will retry after 3.042324499s: waiting for machine to come up
	I1205 20:51:53.812841   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:51:53.835912   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:51:53.920892   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:51:53.943982   46700 system_pods.go:59] 7 kube-system pods found
	I1205 20:51:53.944026   46700 system_pods.go:61] "coredns-5644d7b6d9-kqhgk" [473e53e3-a0bd-4dcb-88c1-d61e9cc3e686] Running
	I1205 20:51:53.944034   46700 system_pods.go:61] "etcd-old-k8s-version-061206" [a2a6a459-41a3-49e3-b32e-a091317390ea] Running
	I1205 20:51:53.944041   46700 system_pods.go:61] "kube-apiserver-old-k8s-version-061206" [9cf24995-fccb-47e4-8d4a-870198b7c82f] Running
	I1205 20:51:53.944054   46700 system_pods.go:61] "kube-controller-manager-old-k8s-version-061206" [225a4a8b-2b6e-46f4-8bd9-9a375b05c23c] Pending
	I1205 20:51:53.944061   46700 system_pods.go:61] "kube-proxy-r5n6g" [5db8876d-ecff-40b3-a61d-aeaf7870166c] Running
	I1205 20:51:53.944068   46700 system_pods.go:61] "kube-scheduler-old-k8s-version-061206" [de56d925-45b3-4c36-b2c2-c90938793aa2] Running
	I1205 20:51:53.944075   46700 system_pods.go:61] "storage-provisioner" [d5d57d93-f94b-4a3e-8c65-25cd4d71b9d5] Running
	I1205 20:51:53.944083   46700 system_pods.go:74] duration metric: took 23.165628ms to wait for pod list to return data ...
	I1205 20:51:53.944093   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:51:53.956907   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:51:53.956949   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:51:53.956964   46700 node_conditions.go:105] duration metric: took 12.864098ms to run NodePressure ...
	I1205 20:51:53.956986   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:51:54.482145   46700 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:51:54.492629   46700 retry.go:31] will retry after 326.419845ms: kubelet not initialised
	I1205 20:51:54.826701   46700 retry.go:31] will retry after 396.475289ms: kubelet not initialised
	I1205 20:51:55.228971   46700 retry.go:31] will retry after 752.153604ms: kubelet not initialised
	I1205 20:51:55.987713   46700 retry.go:31] will retry after 881.822561ms: kubelet not initialised
	I1205 20:51:56.877407   46700 retry.go:31] will retry after 824.757816ms: kubelet not initialised
	I1205 20:51:57.707927   46700 retry.go:31] will retry after 2.392241385s: kubelet not initialised
	I1205 20:51:58.643374   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.1: (2.985387711s)
	I1205 20:51:58.643408   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 from cache
	I1205 20:51:58.643434   46866 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:58.643500   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 20:51:59.407245   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 20:51:59.407282   46866 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
	I1205 20:51:59.407333   46866 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1
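The cache_images.go / crio.go sequence above (process 46866) checks which images are missing from the CRI-O store, removes stale tags with crictl, skips re-copying tarballs that already exist under /var/lib/minikube/images, and loads each one with "sudo podman load -i ...". The sketch below mirrors only that final load step as a local Go program; it is an assumption-laden illustration, not minikube's code, which runs these commands over SSH inside the guest VM.

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// loadCachedImages loads pre-downloaded image tarballs into the podman/CRI-O
	// image store, echoing the "sudo podman load -i" commands in the log above.
	// Illustrative sketch only; error handling and sudo/podman availability are assumed.
	func loadCachedImages(dir string, names []string) error {
		for _, name := range names {
			tarball := filepath.Join(dir, name)
			cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("loading %s: %v\n%s", tarball, err, out)
			}
			fmt.Printf("loaded %s\n", tarball)
		}
		return nil
	}

	func main() {
		images := []string{
			"kube-scheduler_v1.29.0-rc.1",
			"etcd_3.5.10-0",
			"kube-apiserver_v1.29.0-rc.1",
		}
		if err := loadCachedImages("/var/lib/minikube/images", images); err != nil {
			fmt.Println(err)
		}
	}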
	I1205 20:51:57.324016   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324534   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | unable to find current IP address of domain default-k8s-diff-port-463614 in network mk-default-k8s-diff-port-463614
	I1205 20:51:57.324565   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | I1205 20:51:57.324482   47900 retry.go:31] will retry after 3.449667479s: waiting for machine to come up
	I1205 20:52:00.776644   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777141   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Found IP for machine: 192.168.39.27
	I1205 20:52:00.777175   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.777186   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserving static IP address...
	I1205 20:52:00.777825   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Reserved static IP address: 192.168.39.27
	I1205 20:52:00.777878   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.777892   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Waiting for SSH to be available...
	I1205 20:52:00.777918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | skip adding static IP to network mk-default-k8s-diff-port-463614 - found existing host DHCP lease matching {name: "default-k8s-diff-port-463614", mac: "52:54:00:98:7f:07", ip: "192.168.39.27"}
	I1205 20:52:00.777929   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Getting to WaitForSSH function...
	I1205 20:52:00.780317   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.780729   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.780870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH client type: external
	I1205 20:52:00.780909   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa (-rw-------)
	I1205 20:52:00.780940   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:00.780959   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | About to run SSH command:
	I1205 20:52:00.780980   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | exit 0
	I1205 20:52:00.922857   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:00.923204   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetConfigRaw
	I1205 20:52:00.923973   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:00.927405   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.927885   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.927918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.928217   47365 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/config.json ...
	I1205 20:52:00.928469   47365 machine.go:88] provisioning docker machine ...
	I1205 20:52:00.928497   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:00.928735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.928912   47365 buildroot.go:166] provisioning hostname "default-k8s-diff-port-463614"
	I1205 20:52:00.928938   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:00.929092   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:00.931664   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932096   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:00.932130   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:00.932310   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:00.932496   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932672   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:00.932822   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:00.932990   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:00.933401   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:00.933420   47365 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-463614 && echo "default-k8s-diff-port-463614" | sudo tee /etc/hostname
	I1205 20:52:01.078295   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-463614
	
	I1205 20:52:01.078332   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.081604   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082051   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.082079   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.082240   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.082492   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082686   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.082861   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.083034   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.083506   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.083535   47365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-463614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-463614/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-463614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:01.215856   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:01.215884   47365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:01.215912   47365 buildroot.go:174] setting up certificates
	I1205 20:52:01.215927   47365 provision.go:83] configureAuth start
	I1205 20:52:01.215947   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetMachineName
	I1205 20:52:01.216246   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:01.219169   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219465   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.219503   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.219653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.221768   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222137   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.222171   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.222410   47365 provision.go:138] copyHostCerts
	I1205 20:52:01.222493   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:01.222508   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:01.222568   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:01.222686   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:01.222717   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:01.222757   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:01.222825   47365 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:01.222832   47365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:01.222856   47365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:01.222921   47365 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-463614 san=[192.168.39.27 192.168.39.27 localhost 127.0.0.1 minikube default-k8s-diff-port-463614]
	I1205 20:52:02.247282   46374 start.go:369] acquired machines lock for "embed-certs-331495" in 54.15977635s
	I1205 20:52:02.247348   46374 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:52:02.247360   46374 fix.go:54] fixHost starting: 
	I1205 20:52:02.247794   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:02.247830   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:02.265529   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I1205 20:52:02.265970   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:02.266457   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:52:02.266484   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:02.266825   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:02.267016   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:02.267185   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:52:02.268838   46374 fix.go:102] recreateIfNeeded on embed-certs-331495: state=Stopped err=<nil>
	I1205 20:52:02.268859   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	W1205 20:52:02.269010   46374 fix.go:128] unexpected machine state, will restart: <nil>
	I1205 20:52:02.270658   46374 out.go:177] * Restarting existing kvm2 VM for "embed-certs-331495" ...
	I1205 20:52:00.114757   46700 retry.go:31] will retry after 2.136164682s: kubelet not initialised
	I1205 20:52:02.258242   46700 retry.go:31] will retry after 4.673214987s: kubelet not initialised
	I1205 20:52:01.474739   47365 provision.go:172] copyRemoteCerts
	I1205 20:52:01.474804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:01.474834   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.477249   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477632   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.477659   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.477908   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.478119   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.478313   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.478463   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:01.569617   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:01.594120   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1205 20:52:01.618066   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 20:52:01.643143   47365 provision.go:86] duration metric: configureAuth took 427.201784ms
	I1205 20:52:01.643169   47365 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:01.643353   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:01.643435   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.646320   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.646821   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:01.646881   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:01.647001   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:01.647206   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647407   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:01.647555   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:01.647721   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:01.648105   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:01.648135   47365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:01.996428   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:01.996456   47365 machine.go:91] provisioned docker machine in 1.067968652s
	I1205 20:52:01.996468   47365 start.go:300] post-start starting for "default-k8s-diff-port-463614" (driver="kvm2")
	I1205 20:52:01.996482   47365 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:01.996502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:01.996804   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:01.996829   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:01.999880   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000345   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.000378   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.000532   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.000733   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.000872   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.001041   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.088194   47365 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:02.092422   47365 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:02.092447   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:02.092522   47365 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:02.092607   47365 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:02.092692   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:02.100847   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:02.125282   47365 start.go:303] post-start completed in 128.798422ms
	I1205 20:52:02.125308   47365 fix.go:56] fixHost completed within 20.234129302s
	I1205 20:52:02.125334   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.128159   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128506   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.128539   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.128754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.128970   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129157   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.129330   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.129505   47365 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:02.129980   47365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I1205 20:52:02.130001   47365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1205 20:52:02.247134   47365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809522.185244520
	
	I1205 20:52:02.247160   47365 fix.go:206] guest clock: 1701809522.185244520
	I1205 20:52:02.247170   47365 fix.go:219] Guest: 2023-12-05 20:52:02.18524452 +0000 UTC Remote: 2023-12-05 20:52:02.125313647 +0000 UTC m=+165.907305797 (delta=59.930873ms)
	I1205 20:52:02.247193   47365 fix.go:190] guest clock delta is within tolerance: 59.930873ms
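The fix.go lines above are minikube's clock-skew check: it reads the guest clock over SSH and compares it with the host clock, accepting the host time when the delta stays within tolerance (here ~60ms). The mangled `date +%!s(MISSING).%!N(MISSING)` is a logging artifact; judging by the `1701809522.185244520` output, the command actually run is `date +%s.%N`. A rough manual equivalent, with the key path shortened and purely illustrative:

    # sketch only: compare guest vs. host wall-clock time the way fix.go does
    host_now=$(date +%s.%N)
    guest_now=$(ssh -i ~/.minikube/machines/default-k8s-diff-port-463614/id_rsa \
        docker@192.168.39.27 'date +%s.%N')
    echo "delta: $(echo "$guest_now - $host_now" | bc) s"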
	I1205 20:52:02.247199   47365 start.go:83] releasing machines lock for "default-k8s-diff-port-463614", held for 20.356057608s
	I1205 20:52:02.247233   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.247561   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:02.250476   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.250918   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.250952   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.251123   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.251833   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252026   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:02.252117   47365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:02.252168   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.252434   47365 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:02.252461   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:02.255221   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255382   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255711   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.255750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.255870   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.255949   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:02.256004   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:02.256060   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256278   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:02.256288   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256453   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:02.256447   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.256586   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:02.256698   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:02.343546   47365 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:02.368171   47365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:02.518472   47365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:02.524733   47365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:02.524808   47365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:02.541607   47365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:02.541632   47365 start.go:475] detecting cgroup driver to use...
	I1205 20:52:02.541703   47365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:02.560122   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:02.575179   47365 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:02.575244   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:02.591489   47365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:02.606022   47365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:02.711424   47365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:02.828436   47365 docker.go:219] disabling docker service ...
	I1205 20:52:02.828515   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:02.844209   47365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:02.860693   47365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:02.979799   47365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:03.111682   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:03.128706   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:03.147984   47365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:03.148057   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.160998   47365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:03.161068   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.173347   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:03.185126   47365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
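The crictl.yaml written just above points crictl at the CRI-O socket, and the three sed edits that follow set the pause image, the cgroup driver, and the conmon cgroup in CRI-O's drop-in config. After they run, the touched lines of /etc/crio/crio.conf.d/02-crio.conf should look roughly like this (reconstructed from the sed expressions, not dumped from the VM):

    # /etc/crio/crio.conf.d/02-crio.conf -- relevant fragment only
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"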
	I1205 20:52:03.195772   47365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:03.206308   47365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:03.215053   47365 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:03.215103   47365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:03.227755   47365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
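The sysctl failure above is expected when br_netfilter is not loaded yet: /proc/sys/net/bridge/ only appears once the module is in. minikube therefore falls back to loading the module and then enables IPv4 forwarding. The same sequence by hand:

    # make bridged traffic visible to iptables, then allow forwarding
    sudo sysctl net.bridge.bridge-nf-call-iptables \
        || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"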
	I1205 20:52:03.237219   47365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:03.369712   47365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:52:03.561508   47365 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:03.561575   47365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:03.569369   47365 start.go:543] Will wait 60s for crictl version
	I1205 20:52:03.569437   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:52:03.575388   47365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:03.618355   47365 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:03.618458   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.670174   47365 ssh_runner.go:195] Run: crio --version
	I1205 20:52:03.716011   47365 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:02.272006   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Start
	I1205 20:52:02.272171   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring networks are active...
	I1205 20:52:02.272890   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network default is active
	I1205 20:52:02.273264   46374 main.go:141] libmachine: (embed-certs-331495) Ensuring network mk-embed-certs-331495 is active
	I1205 20:52:02.273634   46374 main.go:141] libmachine: (embed-certs-331495) Getting domain xml...
	I1205 20:52:02.274223   46374 main.go:141] libmachine: (embed-certs-331495) Creating domain...
	I1205 20:52:03.644135   46374 main.go:141] libmachine: (embed-certs-331495) Waiting to get IP...
	I1205 20:52:03.645065   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.645451   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.645561   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.645439   48036 retry.go:31] will retry after 246.973389ms: waiting for machine to come up
	I1205 20:52:03.894137   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:03.894708   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:03.894813   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:03.894768   48036 retry.go:31] will retry after 353.753964ms: waiting for machine to come up
	I1205 20:52:04.250496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.251201   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.251231   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.251151   48036 retry.go:31] will retry after 370.705045ms: waiting for machine to come up
	I1205 20:52:04.623959   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:04.624532   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:04.624563   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:04.624488   48036 retry.go:31] will retry after 409.148704ms: waiting for machine to come up
	I1205 20:52:05.035991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.036492   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.036521   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.036458   48036 retry.go:31] will retry after 585.089935ms: waiting for machine to come up
	I1205 20:52:01.272757   46866 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.1: (1.865397348s)
	I1205 20:52:01.272791   46866 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17731-6237/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 from cache
	I1205 20:52:01.272823   46866 cache_images.go:123] Successfully loaded all cached images
	I1205 20:52:01.272830   46866 cache_images.go:92] LoadImages completed in 17.860858219s
	I1205 20:52:01.272913   46866 ssh_runner.go:195] Run: crio config
	I1205 20:52:01.346651   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:01.346671   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:01.346689   46866 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:01.346715   46866 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.162 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-143651 NodeName:no-preload-143651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:01.346890   46866 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-143651"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
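The kubeadm config generated above (for the no-preload-143651 profile: API server on 192.168.61.162:8443, pod CIDR 10.244.0.0/16, CRI-O socket) is written a few lines below to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing kubeadm.yaml to decide whether a re-init is needed. If you want to sanity-check such a file by hand, a dry run against it is one option (illustrative only; this log does not do this):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run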
	I1205 20:52:01.347005   46866 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-143651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:01.347080   46866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1205 20:52:01.360759   46866 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:01.360818   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:01.372537   46866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1205 20:52:01.389057   46866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1205 20:52:01.405689   46866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1205 20:52:01.426066   46866 ssh_runner.go:195] Run: grep 192.168.61.162	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:01.430363   46866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:01.443015   46866 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651 for IP: 192.168.61.162
	I1205 20:52:01.443049   46866 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:01.443202   46866 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:01.443254   46866 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:01.443337   46866 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.key
	I1205 20:52:01.443423   46866 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key.5bf94fca
	I1205 20:52:01.443477   46866 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key
	I1205 20:52:01.443626   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:01.443664   46866 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:01.443689   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:01.443729   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:01.443768   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:01.443800   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:01.443868   46866 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:01.444505   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:01.471368   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:01.495925   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:01.520040   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:01.542515   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:01.565061   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:01.592011   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:01.615244   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:01.640425   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:01.666161   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:01.688991   46866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:01.711978   46866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:01.728642   46866 ssh_runner.go:195] Run: openssl version
	I1205 20:52:01.734248   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:01.746741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751589   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.751647   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:01.757299   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:01.768280   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:01.779234   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783897   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.783961   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:01.789668   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:01.800797   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:01.814741   46866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819713   46866 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.819774   46866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:01.825538   46866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:01.836443   46866 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:01.842191   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:01.850025   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:01.857120   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:01.863507   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:01.870887   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:01.878657   46866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
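Each openssl call above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how minikube decides whether the existing certs can be reused on restart. Standalone form of the same check, with the path picked from the list above:

    CERT=/var/lib/minikube/certs/apiserver-etcd-client.crt
    if sudo openssl x509 -noout -in "$CERT" -checkend 86400; then
        echo "still valid for at least 24h"
    else
        echo "expires within 24h - regenerate"
    fi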
	I1205 20:52:01.886121   46866 kubeadm.go:404] StartCluster: {Name:no-preload-143651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.1 ClusterName:no-preload-143651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:01.886245   46866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:01.886311   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:01.933026   46866 cri.go:89] found id: ""
	I1205 20:52:01.933096   46866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:01.946862   46866 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:01.946891   46866 kubeadm.go:636] restartCluster start
	I1205 20:52:01.946950   46866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:01.959468   46866 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.960467   46866 kubeconfig.go:92] found "no-preload-143651" server: "https://192.168.61.162:8443"
	I1205 20:52:01.962804   46866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:01.975351   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.975427   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:01.988408   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:01.988439   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:01.988493   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.001669   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:02.502716   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:02.502781   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:02.515220   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.002777   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.002843   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.016667   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:03.501748   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:03.501840   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:03.515761   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.001797   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.001873   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.018140   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:04.502697   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:04.502791   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:04.518059   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.002414   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.002515   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.021107   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:05.502637   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:05.502733   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:05.521380   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
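The repeated "Checking apiserver status" / "stopped: unable to get apiserver pid" pairs above are one poll iteration roughly every 500ms: restartCluster keeps probing for a kube-apiserver process before it will talk to the existing control plane. A rough shell equivalent of that loop (the timeout here is arbitrary, not minikube's):

    # poll for a kube-apiserver process, give up after ~60s
    for _ in $(seq 120); do
        pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*') && { echo "pid $pid"; break; }
        sleep 0.5
    done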
	I1205 20:52:03.717595   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetIP
	I1205 20:52:03.720774   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721210   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:03.721242   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:03.721414   47365 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:03.726330   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:03.738414   47365 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:03.738479   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:03.777318   47365 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:03.777380   47365 ssh_runner.go:195] Run: which lz4
	I1205 20:52:03.781463   47365 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:03.785728   47365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:03.785759   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1205 20:52:05.712791   47365 crio.go:444] Took 1.931355 seconds to copy over tarball
	I1205 20:52:05.712888   47365 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
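This is the preload path: stat /preloaded.tar.lz4 on the guest fails, so the ~458 MB cri-o preload tarball is copied over from the local cache and unpacked into /var with lz4 (and removed again further down in the log). Condensed into one hedged sketch, with "guest" standing in for the SSH target and the cache path shortened:

    TARBALL=~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
    ssh guest 'stat /preloaded.tar.lz4' \
        || scp "$TARBALL" guest:/preloaded.tar.lz4
    ssh guest 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'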
	I1205 20:52:06.939842   46700 retry.go:31] will retry after 8.345823287s: kubelet not initialised
	I1205 20:52:05.623348   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:05.623894   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:05.623928   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:05.623844   48036 retry.go:31] will retry after 819.796622ms: waiting for machine to come up
	I1205 20:52:06.445034   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:06.445471   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:06.445504   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:06.445427   48036 retry.go:31] will retry after 716.017152ms: waiting for machine to come up
	I1205 20:52:07.162965   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:07.163496   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:07.163526   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:07.163445   48036 retry.go:31] will retry after 1.085415508s: waiting for machine to come up
	I1205 20:52:08.250373   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:08.250962   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:08.250999   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:08.250909   48036 retry.go:31] will retry after 1.128069986s: waiting for machine to come up
	I1205 20:52:09.380537   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:09.381001   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:09.381027   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:09.380964   48036 retry.go:31] will retry after 1.475239998s: waiting for machine to come up
	I1205 20:52:06.002168   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.002247   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.025123   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:06.502715   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:06.502831   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:06.519395   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.001937   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.002068   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.019028   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:07.501962   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:07.502059   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:07.515098   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.002769   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.002909   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.020137   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.501807   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:08.501949   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:08.518082   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.002421   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.002505   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.016089   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.502171   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.502261   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.515449   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.001975   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.002117   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.013831   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.502398   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.502481   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.514939   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:08.946250   47365 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.233316669s)
	I1205 20:52:08.946291   47365 crio.go:451] Took 3.233468 seconds to extract the tarball
	I1205 20:52:08.946304   47365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:08.988526   47365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:09.041782   47365 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:09.041812   47365 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:09.041908   47365 ssh_runner.go:195] Run: crio config
	I1205 20:52:09.105852   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:09.105879   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:09.105901   47365 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:09.105926   47365 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-463614 NodeName:default-k8s-diff-port-463614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:09.106114   47365 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-463614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:52:09.106218   47365 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-463614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1205 20:52:09.106295   47365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:09.116476   47365 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:09.116569   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:09.125304   47365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (387 bytes)
	I1205 20:52:09.141963   47365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:09.158882   47365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2112 bytes)
	I1205 20:52:09.177829   47365 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:09.181803   47365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:09.194791   47365 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614 for IP: 192.168.39.27
	I1205 20:52:09.194824   47365 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:09.194968   47365 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:09.195028   47365 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:09.195135   47365 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.key
	I1205 20:52:09.195225   47365 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key.310d49ea
	I1205 20:52:09.195287   47365 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key
	I1205 20:52:09.195457   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:09.195502   47365 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:09.195519   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:09.195561   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:09.195594   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:09.195625   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:09.195698   47365 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:09.196495   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:09.221945   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:09.249557   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:09.279843   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:09.309602   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:09.338163   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:09.365034   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:09.394774   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:09.420786   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:09.445787   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:09.474838   47365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:09.499751   47365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:09.523805   47365 ssh_runner.go:195] Run: openssl version
	I1205 20:52:09.530143   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:09.545184   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550681   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.550751   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:09.558670   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:09.573789   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:09.585134   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591055   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.591136   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:09.597286   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:09.608901   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:09.620949   47365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626190   47365 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.626267   47365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:09.632394   47365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
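(Editor's note) The sequence above installs each CA PEM under /usr/share/ca-certificates, asks openssl for its subject hash, and links it into /etc/ssl/certs under that hash (e.g. b5213941.0), which is how OpenSSL finds trusted CAs. A minimal Go sketch of the same idea, shelling out to openssl the way the log does; the paths and helper name are illustrative, not the actual minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the shell sequence in the log:
//   openssl x509 -hash -noout -in <pem>   -> e.g. "b5213941"
//   ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs would
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path only; the test links minikubeCA.pem, 13410.pem and 134102.pem.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}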
	I1205 20:52:09.645362   47365 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:09.650768   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:09.657084   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:09.663183   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:09.669093   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:09.675365   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:09.681992   47365 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
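(Editor's note) Each `openssl x509 -noout -in <crt> -checkend 86400` above asks whether the certificate will still be valid 24 hours from now (exit 0 if yes). A pure-Go equivalent of that check using crypto/x509; this is a sketch, and the file path is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in pemFile is still valid
// after the given duration, mirroring `openssl x509 -checkend <seconds>`.
func validFor(pemFile string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemFile)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", pemFile)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks apiserver-etcd-client.crt, etcd/server.crt, etc.
	ok, err := validFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}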
	I1205 20:52:09.688849   47365 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-463614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-463614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:09.688963   47365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:09.689035   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:09.730999   47365 cri.go:89] found id: ""
	I1205 20:52:09.731061   47365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:09.741609   47365 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:09.741640   47365 kubeadm.go:636] restartCluster start
	I1205 20:52:09.741700   47365 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:09.751658   47365 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.752671   47365 kubeconfig.go:92] found "default-k8s-diff-port-463614" server: "https://192.168.39.27:8444"
	I1205 20:52:09.755361   47365 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:09.765922   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.766006   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.781956   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:09.781983   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:09.782033   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:09.795265   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.295986   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.296088   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.312309   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.795832   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:10.795959   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:10.808880   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:10.857552   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:10.857968   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:10.858002   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:10.857911   48036 retry.go:31] will retry after 1.882319488s: waiting for machine to come up
	I1205 20:52:12.741608   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:12.742051   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:12.742081   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:12.742006   48036 retry.go:31] will retry after 2.598691975s: waiting for machine to come up
	I1205 20:52:15.343818   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:15.344360   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:15.344385   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:15.344306   48036 retry.go:31] will retry after 3.313897625s: waiting for machine to come up
	I1205 20:52:11.002661   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.002740   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.014931   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.502548   46866 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.502621   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.516090   46866 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.975668   46866 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:11.975724   46866 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:11.975739   46866 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:11.975820   46866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:12.032265   46866 cri.go:89] found id: ""
	I1205 20:52:12.032364   46866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:12.050705   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:12.060629   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:12.060726   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.073988   46866 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:12.074015   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:12.209842   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.318235   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108353469s)
	I1205 20:52:13.318280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.518224   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.606064   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:13.695764   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:13.695849   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:13.718394   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.237554   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:14.737066   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:15.236911   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:11.295662   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.295754   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.308889   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:11.796322   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:11.796432   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:11.812351   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.295433   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.295527   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.308482   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:12.795889   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:12.795961   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:12.812458   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.296017   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.296114   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.312758   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:13.796111   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:13.796256   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:13.812247   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.295726   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.295808   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.308712   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:14.796358   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:14.796439   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:14.813173   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.295541   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.295632   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.312665   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.796231   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:15.796378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:15.816767   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:15.292395   46700 retry.go:31] will retry after 12.309806949s: kubelet not initialised
	I1205 20:52:18.659431   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:18.659915   46374 main.go:141] libmachine: (embed-certs-331495) DBG | unable to find current IP address of domain embed-certs-331495 in network mk-embed-certs-331495
	I1205 20:52:18.659944   46374 main.go:141] libmachine: (embed-certs-331495) DBG | I1205 20:52:18.659867   48036 retry.go:31] will retry after 3.672641091s: waiting for machine to come up
	I1205 20:52:15.737064   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.237656   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:16.263010   46866 api_server.go:72] duration metric: took 2.567245952s to wait for apiserver process to appear ...
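(Editor's note) The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll-until-found loop: the wait ends as soon as pgrep returns a PID. A minimal sketch of that retry pattern; here the command runs locally for simplicity, whereas in the log it runs on the node through ssh_runner, and the 500ms interval and timeout are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the apiserver process appears or the
// timeout elapses, roughly like the api_server.go wait loop shown in the log.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return string(out), nil // pgrep exited 0 and printed a PID
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	fmt.Println(pid, err)
}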
	I1205 20:52:16.263039   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:16.263057   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.286115   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.286153   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.286173   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.334683   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:19.334710   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:19.835110   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:19.840833   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:19.840866   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.335444   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.355923   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:20.355956   46866 api_server.go:103] status: https://192.168.61.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:20.835568   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:52:20.840974   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:52:20.849239   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:52:20.849274   46866 api_server.go:131] duration metric: took 4.586226618s to wait for apiserver health ...
	I1205 20:52:20.849284   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:52:20.849323   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:20.850829   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
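(Editor's note) The healthz exchange above is the usual progression for an apiserver coming up: first a 403 because the probe hits /healthz as system:anonymous, then 500 while poststarthooks such as rbac/bootstrap-roles finish, and finally 200 "ok". A small sketch of that kind of poll loop against /healthz; the InsecureSkipVerify transport and hard-coded URL are assumptions for illustration, since the real check authenticates against the cluster's CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, in the spirit of the api_server.go wait in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// URL taken from the log; adjust for the cluster under test.
	fmt.Println(waitForHealthz("https://192.168.61.162:8443/healthz", 2*time.Minute))
}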
	I1205 20:52:16.295650   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.295729   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.312742   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:16.796283   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:16.796364   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:16.812822   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.295879   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.295953   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.312254   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:17.795437   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:17.795519   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:17.808598   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.296187   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.296266   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.312808   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:18.796368   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:18.796480   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:18.812986   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.295511   47365 api_server.go:166] Checking apiserver status ...
	I1205 20:52:19.295576   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:19.308830   47365 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:19.766569   47365 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:19.766653   47365 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:19.766673   47365 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:19.766748   47365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:19.820510   47365 cri.go:89] found id: ""
	I1205 20:52:19.820590   47365 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:19.842229   47365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:19.853234   47365 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:19.853293   47365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866181   47365 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:19.866220   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:20.022098   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.165439   47365 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143295704s)
	I1205 20:52:21.165472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:22.333575   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334146   46374 main.go:141] libmachine: (embed-certs-331495) Found IP for machine: 192.168.72.180
	I1205 20:52:22.334189   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has current primary IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.334205   46374 main.go:141] libmachine: (embed-certs-331495) Reserving static IP address...
	I1205 20:52:22.334654   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.334686   46374 main.go:141] libmachine: (embed-certs-331495) DBG | skip adding static IP to network mk-embed-certs-331495 - found existing host DHCP lease matching {name: "embed-certs-331495", mac: "52:54:00:95:87:db", ip: "192.168.72.180"}
	I1205 20:52:22.334699   46374 main.go:141] libmachine: (embed-certs-331495) Reserved static IP address: 192.168.72.180
	I1205 20:52:22.334717   46374 main.go:141] libmachine: (embed-certs-331495) Waiting for SSH to be available...
	I1205 20:52:22.334727   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Getting to WaitForSSH function...
	I1205 20:52:22.337411   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337832   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.337863   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.337976   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH client type: external
	I1205 20:52:22.338005   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Using SSH private key: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa (-rw-------)
	I1205 20:52:22.338038   46374 main.go:141] libmachine: (embed-certs-331495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:52:22.338057   46374 main.go:141] libmachine: (embed-certs-331495) DBG | About to run SSH command:
	I1205 20:52:22.338071   46374 main.go:141] libmachine: (embed-certs-331495) DBG | exit 0
	I1205 20:52:22.430984   46374 main.go:141] libmachine: (embed-certs-331495) DBG | SSH cmd err, output: <nil>: 
	I1205 20:52:22.431374   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetConfigRaw
	I1205 20:52:22.432120   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.435317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.435737   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.435772   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.436044   46374 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/config.json ...
	I1205 20:52:22.436283   46374 machine.go:88] provisioning docker machine ...
	I1205 20:52:22.436304   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:22.436519   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436687   46374 buildroot.go:166] provisioning hostname "embed-certs-331495"
	I1205 20:52:22.436707   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.436882   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.439595   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.439966   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.439998   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.440179   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.440392   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440558   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.440718   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.440891   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.441216   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.441235   46374 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-331495 && echo "embed-certs-331495" | sudo tee /etc/hostname
	I1205 20:52:22.584600   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-331495
	
	I1205 20:52:22.584662   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.587640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588053   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.588083   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.588255   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.588469   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.588834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.588985   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:22.589340   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:22.589369   46374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-331495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-331495/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-331495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:52:22.722352   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:52:22.722390   46374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17731-6237/.minikube CaCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17731-6237/.minikube}
	I1205 20:52:22.722437   46374 buildroot.go:174] setting up certificates
	I1205 20:52:22.722459   46374 provision.go:83] configureAuth start
	I1205 20:52:22.722475   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetMachineName
	I1205 20:52:22.722776   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:22.725826   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726254   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.726313   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.726616   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.729267   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729606   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.729640   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.729798   46374 provision.go:138] copyHostCerts
	I1205 20:52:22.729843   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem, removing ...
	I1205 20:52:22.729853   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem
	I1205 20:52:22.729907   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/ca.pem (1078 bytes)
	I1205 20:52:22.729986   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem, removing ...
	I1205 20:52:22.729994   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem
	I1205 20:52:22.730019   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/cert.pem (1123 bytes)
	I1205 20:52:22.730090   46374 exec_runner.go:144] found /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem, removing ...
	I1205 20:52:22.730100   46374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem
	I1205 20:52:22.730128   46374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17731-6237/.minikube/key.pem (1675 bytes)
	I1205 20:52:22.730188   46374 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem org=jenkins.embed-certs-331495 san=[192.168.72.180 192.168.72.180 localhost 127.0.0.1 minikube embed-certs-331495]
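(Editor's note) provision.go above issues a server certificate for the machine with a SAN list covering its IP, localhost, and hostnames. A compact sketch of building such a certificate with crypto/x509, using the SAN values from the log; it is self-signed here for brevity, whereas the real flow signs with the minikube CA key, and the organization and lifetime are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-331495"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go log line above.
		DNSNames:    []string{"localhost", "minikube", "embed-certs-331495"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.180"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "wrote self-signed server cert (sketch only)")
}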
	I1205 20:52:22.795361   46374 provision.go:172] copyRemoteCerts
	I1205 20:52:22.795435   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:52:22.795464   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:22.798629   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799006   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:22.799052   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:22.799222   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:22.799448   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:22.799617   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:22.799774   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:22.892255   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 20:52:22.929940   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:52:22.966087   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1205 20:52:22.998887   46374 provision.go:86] duration metric: configureAuth took 276.409362ms
	I1205 20:52:22.998937   46374 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:52:22.999160   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:22.999253   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.002604   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.002992   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.003033   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.003265   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.003516   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003723   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.003916   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.004090   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.004540   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.004568   46374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:52:23.371418   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:52:23.371450   46374 machine.go:91] provisioned docker machine in 935.149228ms
	I1205 20:52:23.371464   46374 start.go:300] post-start starting for "embed-certs-331495" (driver="kvm2")
	I1205 20:52:23.371477   46374 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:52:23.371500   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.371872   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:52:23.371911   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.375440   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.375960   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.375991   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.376130   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.376328   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.376512   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.376693   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.472304   46374 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:52:23.477044   46374 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 20:52:23.477070   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/addons for local assets ...
	I1205 20:52:23.477177   46374 filesync.go:126] Scanning /home/jenkins/minikube-integration/17731-6237/.minikube/files for local assets ...
	I1205 20:52:23.477287   46374 filesync.go:149] local asset: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem -> 134102.pem in /etc/ssl/certs
	I1205 20:52:23.477425   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:52:23.493987   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:23.519048   46374 start.go:303] post-start completed in 147.566985ms
	I1205 20:52:23.519082   46374 fix.go:56] fixHost completed within 21.27172194s
	I1205 20:52:23.519107   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.522260   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522700   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.522735   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.522967   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.523238   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523456   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.523659   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.523893   46374 main.go:141] libmachine: Using SSH client type: native
	I1205 20:52:23.524220   46374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.180 22 <nil> <nil>}
	I1205 20:52:23.524239   46374 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:52:23.648717   46374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701809543.591713401
	
	I1205 20:52:23.648743   46374 fix.go:206] guest clock: 1701809543.591713401
	I1205 20:52:23.648755   46374 fix.go:219] Guest: 2023-12-05 20:52:23.591713401 +0000 UTC Remote: 2023-12-05 20:52:23.519087629 +0000 UTC m=+358.020977056 (delta=72.625772ms)
	I1205 20:52:23.648800   46374 fix.go:190] guest clock delta is within tolerance: 72.625772ms
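(Editor's note) The fix.go lines above read the guest clock with `date +%s.%N`, compare it with the host-side timestamp, and accept the host when the delta is within tolerance (here 72.625772ms). A small sketch of that comparison using the values from the log; the one-second tolerance is an assumption for illustration, since the log only shows the delta being accepted:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the
// host clock, in the spirit of fix.go's guest clock delta check.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log line above (guest 1701809543.591713401, remote ...519087629).
	guest := time.Unix(1701809543, 591713401)
	host := time.Unix(1701809543, 519087629)
	delta, ok := clockDeltaOK(guest, host, time.Second) // tolerance is an assumed value
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}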
	I1205 20:52:23.648808   46374 start.go:83] releasing machines lock for "embed-certs-331495", held for 21.401495157s
	I1205 20:52:23.648838   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.649149   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:23.652098   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652534   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.652577   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.652773   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653350   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653552   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:52:23.653655   46374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:52:23.653709   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.653948   46374 ssh_runner.go:195] Run: cat /version.json
	I1205 20:52:23.653989   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:52:23.657266   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657547   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657637   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657669   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.657946   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:23.657957   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.657970   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:23.658236   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:52:23.658250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658438   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:52:23.658532   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658756   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.658785   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:52:23.658933   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:52:23.777965   46374 ssh_runner.go:195] Run: systemctl --version
	I1205 20:52:23.784199   46374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:52:23.948621   46374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:52:23.957081   46374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:52:23.957163   46374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:52:23.978991   46374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:52:23.979023   46374 start.go:475] detecting cgroup driver to use...
	I1205 20:52:23.979124   46374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:52:23.997195   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:52:24.015420   46374 docker.go:203] disabling cri-docker service (if available) ...
	I1205 20:52:24.015494   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:52:24.031407   46374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:52:24.047587   46374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:52:24.200996   46374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:52:24.332015   46374 docker.go:219] disabling docker service ...
	I1205 20:52:24.332095   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:52:24.350586   46374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:52:24.367457   46374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:52:24.545467   46374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:52:24.733692   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:52:24.748391   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:52:24.768555   46374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1205 20:52:24.768644   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.780668   46374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:52:24.780740   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.792671   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.806500   46374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:52:24.818442   46374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:52:24.829822   46374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:52:24.842070   46374 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:52:24.842138   46374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:52:24.857370   46374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:52:24.867993   46374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:52:25.024629   46374 ssh_runner.go:195] Run: sudo systemctl restart crio
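The block above shows minikube reconfiguring CRI-O on the guest over SSH: pointing crictl at the CRI-O socket, setting the pause image and cgroup manager, loading br_netfilter, enabling IPv4 forwarding, and restarting the service. A minimal sketch of the same sequence run by hand (assuming the /etc/crio/crio.conf.d/02-crio.conf drop-in used in this run):

	# point crictl at the CRI-O socket, as written to /etc/crictl.yaml above
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver in the 02-crio.conf drop-in
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# bridge netfilter module, IPv4 forwarding, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio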
	I1205 20:52:25.231556   46374 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:52:25.231630   46374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:52:25.237863   46374 start.go:543] Will wait 60s for crictl version
	I1205 20:52:25.237929   46374 ssh_runner.go:195] Run: which crictl
	I1205 20:52:25.242501   46374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:52:25.289507   46374 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1205 20:52:25.289591   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.340432   46374 ssh_runner.go:195] Run: crio --version
	I1205 20:52:25.398354   46374 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1205 20:52:25.399701   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetIP
	I1205 20:52:25.402614   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.402997   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:52:25.403029   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:52:25.403259   46374 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 20:52:25.407873   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
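The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the host-side gateway. Expanded into separate steps (a sketch; 192.168.72.1 is the gateway seen in this run, yours may differ):

	# drop any stale host.minikube.internal entry, append the gateway IP,
	# and copy the result back into place
	GATEWAY_IP=192.168.72.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$GATEWAY_IP"
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts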
	I1205 20:52:25.420725   46374 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 20:52:25.420801   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:25.468651   46374 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1205 20:52:25.468726   46374 ssh_runner.go:195] Run: which lz4
	I1205 20:52:25.473976   46374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1205 20:52:25.478835   46374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:52:25.478871   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
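The existence check above fails, so minikube copies the ~458 MB preloaded image tarball to the guest over SSH; further down in this log the same process unpacks it with lz4-compressed tar and deletes it. A rough manual equivalent, assuming the same /preloaded.tar.lz4 path:

	# check for the preload, then unpack it into /var so CRI-O finds the images
	stat -c '%s %y' /preloaded.tar.lz4 || echo "preload missing; minikube scps it over first"
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4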
	I1205 20:52:20.852220   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:20.867614   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:20.892008   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:20.912985   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:20.913027   46866 system_pods.go:61] "coredns-76f75df574-8d24t" [10265d3b-ddf0-4559-8194-d42563df88a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:20.913038   46866 system_pods.go:61] "etcd-no-preload-143651" [a6b62f23-a944-41ec-b465-6027fcf1f413] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:20.913051   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [5a6b5874-6c6b-4ed6-aa68-8e7fc35a486e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:20.913061   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [42b01d8c-2d8f-467e-8183-eef2e6f73b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:20.913074   46866 system_pods.go:61] "kube-proxy-mltvl" [9adea5d0-e824-40ff-b5b4-16f84fd439ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:20.913085   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [17474fca-8390-48db-bebe-47c1e2cf7b26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:20.913107   46866 system_pods.go:61] "metrics-server-57f55c9bc5-mhxpn" [3eb25a58-bea3-4266-9bf8-8f186ee65e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:20.913120   46866 system_pods.go:61] "storage-provisioner" [cfe9d24c-a534-4778-980b-99f7addcf0b9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:20.913132   46866 system_pods.go:74] duration metric: took 21.101691ms to wait for pod list to return data ...
	I1205 20:52:20.913143   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:20.917108   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:20.917140   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:20.917156   46866 node_conditions.go:105] duration metric: took 4.003994ms to run NodePressure ...
	I1205 20:52:20.917180   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.315507   46866 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321271   46866 kubeadm.go:787] kubelet initialised
	I1205 20:52:21.321301   46866 kubeadm.go:788] duration metric: took 5.763416ms waiting for restarted kubelet to initialise ...
	I1205 20:52:21.321310   46866 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:21.327760   46866 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:23.354192   46866 pod_ready.go:102] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:25.353274   46866 pod_ready.go:92] pod "coredns-76f75df574-8d24t" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:25.353356   46866 pod_ready.go:81] duration metric: took 4.02555842s waiting for pod "coredns-76f75df574-8d24t" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:25.353372   46866 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
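pod_ready.go polls each system-critical pod's Ready condition through the API server. A hypothetical manual equivalent with kubectl, using this run's profile name as the context and the same 4m timeout, would be roughly:

	# hypothetical manual equivalent of the pod_ready wait above
	kubectl --context no-preload-143651 -n kube-system \
	  wait --for=condition=Ready pod/etcd-no-preload-143651 --timeout=4m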
	I1205 20:52:21.402472   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.498902   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:21.585971   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:21.586073   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:21.605993   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.120378   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:22.620326   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.119466   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:23.619549   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.120228   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:24.143130   47365 api_server.go:72] duration metric: took 2.557157382s to wait for apiserver process to appear ...
	I1205 20:52:24.143163   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:24.143182   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:27.608165   46700 retry.go:31] will retry after 7.717398196s: kubelet not initialised
	I1205 20:52:28.335417   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:28.335446   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:28.335457   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.429478   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.429507   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:28.929996   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:28.936475   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:28.936525   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.430308   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.437787   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:29.437838   47365 api_server.go:103] status: https://192.168.39.27:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:29.930326   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:52:29.942625   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:52:29.953842   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:29.953875   47365 api_server.go:131] duration metric: took 5.810704359s to wait for apiserver health ...
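The health probes above progress from 403 (anonymous access before the RBAC bootstrap roles exist) through 500 (poststarthooks still failing) to 200. A sketch of watching the same endpoint by hand, assuming this run's https://192.168.39.27:8444 address, skipping verification of the cluster's self-signed certificate, and relying on anonymous access to /healthz being allowed once the bootstrap roles are in place:

	# poll the endpoint until it returns "ok"
	until curl -ksf https://192.168.39.27:8444/healthz | grep -qx ok; do
	  sleep 1
	done
	# per-check breakdown like the 500 responses above
	curl -ks 'https://192.168.39.27:8444/healthz?verbose'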
	I1205 20:52:29.953889   47365 cni.go:84] Creating CNI manager for ""
	I1205 20:52:29.953904   47365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:29.955505   47365 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:27.326223   46374 crio.go:444] Took 1.852284 seconds to copy over tarball
	I1205 20:52:27.326333   46374 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:52:27.374784   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:29.378733   46866 pod_ready.go:102] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:30.375181   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:30.375266   46866 pod_ready.go:81] duration metric: took 5.021883955s waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.375316   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:29.956914   47365 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:29.981391   47365 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
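The scp above writes a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; its contents are not printed in the log. Purely as an illustration of what such a bridge conflist typically contains (the values below are assumptions, not the actual file):

	# illustration only: the real 1-k8s.conflist contents are not shown in the log
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF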
	I1205 20:52:30.016634   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:30.030957   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:30.031030   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:30.031047   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:30.031069   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:30.031088   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:30.031117   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:30.031135   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:30.031148   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:30.031165   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:30.031177   47365 system_pods.go:74] duration metric: took 14.513879ms to wait for pod list to return data ...
	I1205 20:52:30.031190   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:30.035458   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:30.035493   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:30.035506   47365 node_conditions.go:105] duration metric: took 4.295594ms to run NodePressure ...
	I1205 20:52:30.035525   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:30.302125   47365 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307852   47365 kubeadm.go:787] kubelet initialised
	I1205 20:52:30.307875   47365 kubeadm.go:788] duration metric: took 5.724991ms waiting for restarted kubelet to initialise ...
	I1205 20:52:30.307883   47365 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:30.316621   47365 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.323682   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323716   47365 pod_ready.go:81] duration metric: took 7.060042ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.323728   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.323736   47365 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.338909   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338945   47365 pod_ready.go:81] duration metric: took 15.198541ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.338967   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.338977   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.349461   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349491   47365 pod_ready.go:81] duration metric: took 10.504515ms waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.349505   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.349513   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:30.422520   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422553   47365 pod_ready.go:81] duration metric: took 73.030993ms waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:30.422569   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:30.422588   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.212527   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212553   47365 pod_ready.go:81] duration metric: took 789.956497ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.212564   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-proxy-g4zct" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.212575   47365 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:31.727110   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727140   47365 pod_ready.go:81] duration metric: took 514.553589ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:31.727154   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:31.727162   47365 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.168658   47365 pod_ready.go:97] node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168695   47365 pod_ready.go:81] duration metric: took 441.52358ms waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:52:32.168711   47365 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-463614" hosting pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:32.168720   47365 pod_ready.go:38] duration metric: took 1.860826751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:32.168747   47365 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:52:32.182053   47365 ops.go:34] apiserver oom_adj: -16
	I1205 20:52:32.182075   47365 kubeadm.go:640] restartCluster took 22.440428452s
	I1205 20:52:32.182083   47365 kubeadm.go:406] StartCluster complete in 22.493245354s
	I1205 20:52:32.182130   47365 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.182208   47365 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:52:32.184035   47365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:32.290773   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:52:32.290931   47365 config.go:182] Loaded profile config "default-k8s-diff-port-463614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:52:32.290921   47365 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:52:32.291055   47365 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291079   47365 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291088   47365 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-463614"
	I1205 20:52:32.291099   47365 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-463614"
	I1205 20:52:32.291123   47365 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291133   47365 addons.go:240] addon metrics-server should already be in state true
	I1205 20:52:32.291177   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291093   47365 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.291220   47365 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:52:32.291298   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.291586   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291607   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291633   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291635   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.291713   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.291739   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.311298   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I1205 20:52:32.311514   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35601
	I1205 20:52:32.311541   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1205 20:52:32.311733   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.311932   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312026   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.312291   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312325   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312434   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312456   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312487   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.312501   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.312688   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312763   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312833   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.312942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.313276   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313300   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.313359   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.313390   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.316473   47365 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-463614"
	W1205 20:52:32.316493   47365 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:52:32.316520   47365 host.go:66] Checking if "default-k8s-diff-port-463614" exists ...
	I1205 20:52:32.317093   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.317125   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.328598   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I1205 20:52:32.329097   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.329225   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I1205 20:52:32.329589   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.329608   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.329674   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.330230   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.330248   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.330298   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330484   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330553   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.330719   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.330908   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I1205 20:52:32.331201   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.331935   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.331953   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.332351   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.332472   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.332653   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.512055   47365 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:52:32.333098   47365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:52:32.511993   47365 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:52:32.536814   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:52:32.512201   47365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:52:32.536942   47365 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.536958   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:52:32.536985   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.536843   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:52:32.537043   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.541412   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541780   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.541924   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.541958   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542190   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542369   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.542394   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.542434   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.542641   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.542748   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.542905   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.542939   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.543088   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.543246   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.554014   47365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I1205 20:52:32.554513   47365 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:52:32.554975   47365 main.go:141] libmachine: Using API Version  1
	I1205 20:52:32.555007   47365 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:52:32.555387   47365 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:52:32.555634   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetState
	I1205 20:52:32.557606   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .DriverName
	I1205 20:52:32.557895   47365 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.557911   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:52:32.557936   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHHostname
	I1205 20:52:32.561075   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561502   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:7f:07", ip: ""} in network mk-default-k8s-diff-port-463614: {Iface:virbr1 ExpiryTime:2023-12-05 21:51:55 +0000 UTC Type:0 Mac:52:54:00:98:7f:07 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:default-k8s-diff-port-463614 Clientid:01:52:54:00:98:7f:07}
	I1205 20:52:32.561553   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | domain default-k8s-diff-port-463614 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:7f:07 in network mk-default-k8s-diff-port-463614
	I1205 20:52:32.561735   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHPort
	I1205 20:52:32.561942   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHKeyPath
	I1205 20:52:32.562135   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .GetSSHUsername
	I1205 20:52:32.562338   47365 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/default-k8s-diff-port-463614/id_rsa Username:docker}
	I1205 20:52:32.673513   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:52:32.682442   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:52:32.682472   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:52:32.706007   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:52:32.726379   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:52:32.726413   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:52:32.779247   47365 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1205 20:52:32.780175   47365 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-463614" context rescaled to 1 replicas
	I1205 20:52:32.780220   47365 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:52:32.787518   47365 out.go:177] * Verifying Kubernetes components...
	I1205 20:52:32.790046   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:52:32.796219   47365 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:32.796248   47365 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:52:32.854438   47365 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:52:34.594203   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.920648219s)
	I1205 20:52:34.594267   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594294   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594294   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888240954s)
	I1205 20:52:34.594331   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594343   47365 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.80425984s)
	I1205 20:52:34.594373   47365 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:34.594350   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594710   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594729   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594750   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.594755   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.594772   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.594783   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594801   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.594754   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.594860   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.595134   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595195   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.595229   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595238   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.595356   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.595375   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.610358   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.610390   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.610651   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.610677   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689242   47365 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.834763203s)
	I1205 20:52:34.689294   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689309   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.689648   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.689698   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.689717   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.689740   47365 main.go:141] libmachine: Making call to close driver server
	I1205 20:52:34.689754   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) Calling .Close
	I1205 20:52:34.690020   47365 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:52:34.690025   47365 main.go:141] libmachine: (default-k8s-diff-port-463614) DBG | Closing plugin on server side
	I1205 20:52:34.690035   47365 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:52:34.690046   47365 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-463614"
	I1205 20:52:34.692072   47365 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
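	Each addon step above follows the same pattern: the manifest bytes are copied onto the node (the "scp memory -->" lines) and then applied with the node's own kubectl under an explicit KUBECONFIG. Below is a minimal local sketch of that apply step, assuming the manifests are already on disk; the real step runs the command over SSH with sudo, and the paths are taken from the log purely for illustration.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests runs kubectl apply with an explicit KUBECONFIG against one
	// or more manifest files, mirroring the addon step logged above.
	func applyManifests(kubeconfig, kubectl string, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		err := applyManifests(
			"/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.28.4/kubectl",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}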
	I1205 20:52:30.639619   46374 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.313251826s)
	I1205 20:52:30.641314   46374 crio.go:451] Took 3.315054 seconds to extract the tarball
	I1205 20:52:30.641328   46374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:52:30.687076   46374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:52:30.745580   46374 crio.go:496] all images are preloaded for cri-o runtime.
	I1205 20:52:30.745603   46374 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:52:30.745681   46374 ssh_runner.go:195] Run: crio config
	I1205 20:52:30.807631   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:30.807656   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:30.807674   46374 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1205 20:52:30.807692   46374 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-331495 NodeName:embed-certs-331495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:52:30.807828   46374 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-331495"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
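	The "0%!"(MISSING) tokens in the evictionHard block above are a printf-formatting artifact, not part of the configuration: the intended values are plain "0%", and the stray %!"(MISSING) appears because the rendered YAML (which contains literal % characters) was evidently passed through a format-string logger with no arguments. A minimal Go sketch reproducing the artifact:

	package main

	import "fmt"

	func main() {
		cfg := `evictionHard:
	  nodefs.available: "0%"`

		// Using text that contains a literal '%' as the format string makes fmt
		// treat %" as a verb with no argument and print %!"(MISSING).
		fmt.Printf(cfg + "\n")

		// Passing the text as an argument instead renders it verbatim.
		fmt.Printf("%s\n", cfg)
	}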
	I1205 20:52:30.807897   46374 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-331495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1205 20:52:30.807958   46374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1205 20:52:30.820571   46374 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:52:30.820679   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:52:30.831881   46374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1205 20:52:30.852058   46374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:52:30.870516   46374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1205 20:52:30.888000   46374 ssh_runner.go:195] Run: grep 192.168.72.180	control-plane.minikube.internal$ /etc/hosts
	I1205 20:52:30.892529   46374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:52:30.904910   46374 certs.go:56] Setting up /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495 for IP: 192.168.72.180
	I1205 20:52:30.904950   46374 certs.go:190] acquiring lock for shared ca certs: {Name:mk9f340d3bf614699fed38c32e2f7e6c8922e117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:52:30.905143   46374 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key
	I1205 20:52:30.905197   46374 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key
	I1205 20:52:30.905280   46374 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/client.key
	I1205 20:52:30.905336   46374 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key.379caec1
	I1205 20:52:30.905368   46374 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key
	I1205 20:52:30.905463   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem (1338 bytes)
	W1205 20:52:30.905489   46374 certs.go:433] ignoring /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410_empty.pem, impossibly tiny 0 bytes
	I1205 20:52:30.905499   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:52:30.905525   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/ca.pem (1078 bytes)
	I1205 20:52:30.905550   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:52:30.905572   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/certs/home/jenkins/minikube-integration/17731-6237/.minikube/certs/key.pem (1675 bytes)
	I1205 20:52:30.905609   46374 certs.go:437] found cert: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem (1708 bytes)
	I1205 20:52:30.906129   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1205 20:52:30.930322   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:52:30.953120   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:52:30.976792   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/embed-certs-331495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:52:31.000462   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:52:31.025329   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:52:31.050451   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:52:31.075644   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:52:31.101693   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/ssl/certs/134102.pem --> /usr/share/ca-certificates/134102.pem (1708 bytes)
	I1205 20:52:31.125712   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:52:31.149721   46374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17731-6237/.minikube/certs/13410.pem --> /usr/share/ca-certificates/13410.pem (1338 bytes)
	I1205 20:52:31.173466   46374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:52:31.191836   46374 ssh_runner.go:195] Run: openssl version
	I1205 20:52:31.197909   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134102.pem && ln -fs /usr/share/ca-certificates/134102.pem /etc/ssl/certs/134102.pem"
	I1205 20:52:31.212206   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219081   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  5 19:49 /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.219155   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134102.pem
	I1205 20:52:31.225423   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134102.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:52:31.239490   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:52:31.251505   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256613   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.256678   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:52:31.262730   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:52:31.274879   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13410.pem && ln -fs /usr/share/ca-certificates/13410.pem /etc/ssl/certs/13410.pem"
	I1205 20:52:31.286201   46374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291593   46374 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  5 19:49 /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.291658   46374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13410.pem
	I1205 20:52:31.298904   46374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13410.pem /etc/ssl/certs/51391683.0"
	I1205 20:52:31.310560   46374 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1205 20:52:31.315670   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:52:31.322461   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:52:31.328590   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:52:31.334580   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:52:31.341827   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:52:31.348456   46374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
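	The openssl x509 -noout -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours. Below is a stdlib-only Go sketch of the same check; it is not minikube's code, and the path is simply one of the certificates probed above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate in pemPath expires within d,
	// mirroring `openssl x509 -noout -checkend <seconds>`.
	func checkend(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}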
	I1205 20:52:31.354835   46374 kubeadm.go:404] StartCluster: {Name:embed-certs-331495 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-331495 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 20:52:31.354945   46374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:52:31.355024   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:31.396272   46374 cri.go:89] found id: ""
	I1205 20:52:31.396346   46374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:52:31.406603   46374 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1205 20:52:31.406629   46374 kubeadm.go:636] restartCluster start
	I1205 20:52:31.406683   46374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 20:52:31.417671   46374 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.419068   46374 kubeconfig.go:92] found "embed-certs-331495" server: "https://192.168.72.180:8443"
	I1205 20:52:31.421304   46374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 20:52:31.432188   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.432260   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.445105   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.445132   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.445182   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.457857   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:31.958205   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:31.958322   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:31.972477   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.458645   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.458732   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.475471   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.958778   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:32.958872   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:32.973340   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.458838   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.458924   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.475090   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:33.958680   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:33.958776   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:33.974789   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.458297   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.458371   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.471437   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:34.958961   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:34.959030   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:34.972007   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:35.458648   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.458729   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.471573   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:32.362684   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.362706   46866 pod_ready.go:81] duration metric: took 1.98737949s waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.362715   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368694   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.368717   46866 pod_ready.go:81] duration metric: took 5.993796ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.368726   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375418   46866 pod_ready.go:92] pod "kube-proxy-mltvl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.375442   46866 pod_ready.go:81] duration metric: took 6.709035ms waiting for pod "kube-proxy-mltvl" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.375452   46866 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383393   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:32.383418   46866 pod_ready.go:81] duration metric: took 7.957397ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:32.383430   46866 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:34.497914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:34.693693   47365 addons.go:502] enable addons completed in 2.40279745s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 20:52:35.331317   46700 retry.go:31] will retry after 13.122920853s: kubelet not initialised
	I1205 20:52:35.958930   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:35.959020   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:35.971607   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.458135   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.458202   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.475097   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.958621   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:36.958703   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:36.974599   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.458670   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.458790   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.472296   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:37.958470   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:37.958561   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:37.971241   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.458862   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.458957   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.471475   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:38.958727   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:38.958807   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:38.971366   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.458991   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.459084   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.471352   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:39.958955   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:39.959052   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:39.972803   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:40.458181   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.458251   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.470708   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:36.499335   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:38.996779   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:36.611450   47365 node_ready.go:58] node "default-k8s-diff-port-463614" has status "Ready":"False"
	I1205 20:52:39.111234   47365 node_ready.go:49] node "default-k8s-diff-port-463614" has status "Ready":"True"
	I1205 20:52:39.111266   47365 node_ready.go:38] duration metric: took 4.51686489s waiting for node "default-k8s-diff-port-463614" to be "Ready" ...
	I1205 20:52:39.111278   47365 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:39.117815   47365 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124431   47365 pod_ready.go:92] pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.124455   47365 pod_ready.go:81] duration metric: took 6.615213ms waiting for pod "coredns-5dd5756b68-6pmzf" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.124464   47365 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131301   47365 pod_ready.go:92] pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:39.131340   47365 pod_ready.go:81] duration metric: took 6.85604ms waiting for pod "etcd-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:39.131352   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:41.155265   47365 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:40.958830   46374 api_server.go:166] Checking apiserver status ...
	I1205 20:52:40.958921   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1205 20:52:40.970510   46374 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1205 20:52:41.432806   46374 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1205 20:52:41.432840   46374 kubeadm.go:1135] stopping kube-system containers ...
	I1205 20:52:41.432854   46374 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 20:52:41.432909   46374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:52:41.476486   46374 cri.go:89] found id: ""
	I1205 20:52:41.476550   46374 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 20:52:41.493676   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:52:41.503594   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:52:41.503681   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512522   46374 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1205 20:52:41.512550   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:41.645081   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.368430   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.586289   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.657555   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:42.753020   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:52:42.753103   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:42.767926   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.286111   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:43.786148   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.285601   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:44.785638   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.285508   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:52:45.326812   46374 api_server.go:72] duration metric: took 2.573794156s to wait for apiserver process to appear ...
	I1205 20:52:45.326839   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:52:45.326857   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327337   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:45.327367   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:45.327771   46374 api_server.go:269] stopped: https://192.168.72.180:8443/healthz: Get "https://192.168.72.180:8443/healthz": dial tcp 192.168.72.180:8443: connect: connection refused
	I1205 20:52:40.998702   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:43.508882   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:42.152898   47365 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:42.152926   47365 pod_ready.go:81] duration metric: took 3.021552509s waiting for pod "kube-apiserver-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:42.152939   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320531   47365 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.320632   47365 pod_ready.go:81] duration metric: took 1.167680941s waiting for pod "kube-controller-manager-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.320660   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521255   47365 pod_ready.go:92] pod "kube-proxy-g4zct" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.521286   47365 pod_ready.go:81] duration metric: took 200.606753ms waiting for pod "kube-proxy-g4zct" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.521300   47365 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911946   47365 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:43.911972   47365 pod_ready.go:81] duration metric: took 390.664131ms waiting for pod "kube-scheduler-default-k8s-diff-port-463614" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:43.911983   47365 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:46.220630   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.459426   46700 kubeadm.go:787] kubelet initialised
	I1205 20:52:48.459452   46700 kubeadm.go:788] duration metric: took 53.977281861s waiting for restarted kubelet to initialise ...
	I1205 20:52:48.459460   46700 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:48.465332   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471155   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.471184   46700 pod_ready.go:81] duration metric: took 5.815983ms waiting for pod "coredns-5644d7b6d9-8rth6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.471195   46700 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476833   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.476861   46700 pod_ready.go:81] duration metric: took 5.658311ms waiting for pod "coredns-5644d7b6d9-kqhgk" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.476876   46700 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481189   46700 pod_ready.go:92] pod "etcd-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.481217   46700 pod_ready.go:81] duration metric: took 4.332284ms waiting for pod "etcd-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.481230   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485852   46700 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.485869   46700 pod_ready.go:81] duration metric: took 4.630813ms waiting for pod "kube-apiserver-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.485879   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:45.828213   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.185115   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.185143   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.185156   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.228977   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 20:52:49.229017   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 20:52:49.328278   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.336930   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.336971   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:49.828530   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:49.835188   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:49.835215   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:50.328834   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.337852   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1205 20:52:50.337885   46374 api_server.go:103] status: https://192.168.72.180:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1205 20:52:45.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:47.998466   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.497317   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.828313   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:52:50.835050   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:52:50.844093   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:52:50.844124   46374 api_server.go:131] duration metric: took 5.517278039s to wait for apiserver health ...
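	The api_server.go probes above hit https://192.168.72.180:8443/healthz repeatedly, tolerating connection-refused, 403 (anonymous access) and 500 (post-start hooks still failing) responses until the endpoint returns 200. A minimal sketch of such a probe loop follows; it is not minikube's implementation, and TLS verification is skipped only because the probe runs anonymously against the apiserver's self-signed certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls the given /healthz URL until it returns 200 or the deadline passes.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz did not return 200 within %s", timeout)
	}

	func main() {
		if err := probeHealthz("https://192.168.72.180:8443/healthz", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}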
	I1205 20:52:50.844134   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:52:50.844141   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:52:50.846047   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:52:48.220942   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.720446   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:48.858954   46700 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:48.858980   46700 pod_ready.go:81] duration metric: took 373.093905ms waiting for pod "kube-controller-manager-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:48.858989   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260468   46700 pod_ready.go:92] pod "kube-proxy-r5n6g" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.260493   46700 pod_ready.go:81] duration metric: took 401.497792ms waiting for pod "kube-proxy-r5n6g" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.260501   46700 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658952   46700 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:49.658977   46700 pod_ready.go:81] duration metric: took 398.469864ms waiting for pod "kube-scheduler-old-k8s-version-061206" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:49.658986   46700 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:51.966947   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:50.848285   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:52:50.865469   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:52:50.918755   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:52:50.951671   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:52:50.951705   46374 system_pods.go:61] "coredns-5dd5756b68-7xr6w" [8300dbf8-413a-4171-9e56-53f0f2d03fd5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 20:52:50.951712   46374 system_pods.go:61] "etcd-embed-certs-331495" [b2802bcb-262e-4d2a-9589-b1b3885de515] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 20:52:50.951722   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [6f9a28a7-8827-4071-8c68-f2671e7a8017] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 20:52:50.951738   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [24e85887-7f58-4a5c-b0d4-4eebd6076a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 20:52:50.951744   46374 system_pods.go:61] "kube-proxy-76qq2" [ffd744ec-9522-443c-b609-b11e24ab9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 20:52:50.951750   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [aaa502dc-a7cf-4f76-b79f-aa8be1ae48f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 20:52:50.951756   46374 system_pods.go:61] "metrics-server-57f55c9bc5-bcg28" [e60503c2-732d-44a3-b5da-fbf7a0cfd981] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:52:50.951761   46374 system_pods.go:61] "storage-provisioner" [be1aa61b-82e9-4382-ab1c-89e30b801fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:52:50.951767   46374 system_pods.go:74] duration metric: took 32.973877ms to wait for pod list to return data ...
	I1205 20:52:50.951773   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:52:50.971413   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:52:50.971440   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:52:50.971449   46374 node_conditions.go:105] duration metric: took 19.672668ms to run NodePressure ...
	I1205 20:52:50.971465   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 20:52:51.378211   46374 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383462   46374 kubeadm.go:787] kubelet initialised
	I1205 20:52:51.383487   46374 kubeadm.go:788] duration metric: took 5.246601ms waiting for restarted kubelet to initialise ...
	I1205 20:52:51.383495   46374 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:52:51.393558   46374 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:53.414801   46374 pod_ready.go:102] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.426681   46374 pod_ready.go:92] pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace has status "Ready":"True"
	I1205 20:52:55.426710   46374 pod_ready.go:81] duration metric: took 4.033124274s waiting for pod "coredns-5dd5756b68-7xr6w" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:55.426725   46374 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:52:52.498509   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.997539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:53.221825   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:55.723682   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:54.468896   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:56.966471   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.468158   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.469797   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.497582   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.500937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:57.727756   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.727968   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:52:59.466541   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469387   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:01.469996   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.968435   46374 pod_ready.go:102] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.969033   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.969065   46374 pod_ready.go:81] duration metric: took 9.542324599s waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.969073   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975019   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.975041   46374 pod_ready.go:81] duration metric: took 5.961268ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.975049   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980743   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.980771   46374 pod_ready.go:81] duration metric: took 5.713974ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.980779   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985565   46374 pod_ready.go:92] pod "kube-proxy-76qq2" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.985596   46374 pod_ready.go:81] duration metric: took 4.805427ms waiting for pod "kube-proxy-76qq2" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.985610   46374 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992009   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:53:04.992035   46374 pod_ready.go:81] duration metric: took 6.416324ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:04.992047   46374 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	I1205 20:53:01.996877   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.997311   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:02.221319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:04.720314   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:03.966830   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.465943   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:07.272848   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.272897   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:05.997810   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.497408   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:06.722608   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:09.222226   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:08.965894   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.967253   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.466458   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.773608   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.773778   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:10.997547   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:12.999476   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.496736   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:11.721128   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:13.721371   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.221780   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:15.466602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.965160   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:16.272951   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.772527   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:17.497284   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.498006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:18.223073   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.724402   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:19.966424   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.466866   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:20.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:22.772789   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.273369   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:21.997270   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.496150   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:23.221999   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:25.223587   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:24.967755   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.465568   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.772596   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:30.273464   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:26.496470   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.003099   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:27.721654   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.724134   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:29.466332   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.966465   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.773521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:35.272236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:31.497006   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.000663   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:32.221725   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.719806   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:34.466035   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.966501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:37.773436   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.274255   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.496949   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.996265   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:36.721339   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:38.723854   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.221087   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:39.465585   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:41.465785   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.467239   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:42.773263   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:44.773717   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:40.998588   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.496904   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.497783   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:43.222148   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.722122   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:45.966317   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.966572   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.272412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:49.273057   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.997444   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.496708   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:47.722350   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.219843   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:50.467523   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.967357   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:51.773424   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:53.775574   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.499839   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.997448   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:52.222442   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:54.719693   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:55.466751   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:57.966602   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.271805   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.272923   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.273306   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.998244   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:59.498440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:56.720684   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:53:58.729688   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.220861   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:00.466162   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.966846   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:02.773903   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.271747   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:01.995748   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:04.002522   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:03.723212   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.224289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:05.465907   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.466264   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:07.272960   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.274281   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:06.497442   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.997440   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:08.721146   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:10.724743   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:09.966368   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.966796   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.772305   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.772470   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:11.496229   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.497913   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:13.221912   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.722076   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:14.467708   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:16.965932   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.773481   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:17.774552   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.273733   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:15.998027   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.496453   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.497053   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.223289   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:20.722234   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:18.966869   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:21.465921   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:23.466328   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.272550   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.497084   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:24.498177   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:22.727882   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.221485   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:25.966388   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.466553   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.772616   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.773188   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:26.997209   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:28.997776   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:27.721711   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:29.722528   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:30.964854   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.966383   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.272612   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.275600   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:31.498601   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:33.997450   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:32.220641   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:34.222232   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.476491   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.968512   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.772248   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.272991   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:35.997574   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:37.999016   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.501116   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:36.723179   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:39.220182   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:40.469607   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.968860   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.274044   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.772706   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:42.502208   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:44.997516   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:41.720811   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:43.721757   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.725689   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.466766   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.966704   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:45.773511   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.273161   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.274031   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:47.497342   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:49.502501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:48.223549   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.719890   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:50.465849   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.466157   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.772748   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:55.272781   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:51.997636   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.499333   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:52.720512   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.721826   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:54.466519   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.466580   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.274370   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.774179   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:56.997654   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.497915   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:57.221713   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:59.723015   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:54:58.965289   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:00.966027   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.967557   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:02.273349   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.773101   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.996491   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:03.996649   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:01.723123   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:04.220986   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.224736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.466592   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.966611   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:06.773180   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.774008   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:05.997589   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:07.998076   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.001226   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:08.720517   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.221172   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:10.466096   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.467200   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:11.272981   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.773210   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:12.496043   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.497518   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:13.725751   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.219939   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:14.966795   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:17.466501   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.272578   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.273500   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:16.997861   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.499434   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:18.221058   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.720978   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:19.466641   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.965389   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:20.772109   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.274633   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:21.997800   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:24.497501   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.220292   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.722738   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:23.966366   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.966799   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.465341   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:25.773108   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:27.774236   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.274971   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:26.997610   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.997753   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:28.220185   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.220399   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:30.466026   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.966220   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.772859   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:35.272898   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:31.497899   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:33.500772   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:32.220696   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.221098   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.222701   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:34.966787   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.465676   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:37.775190   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.272006   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:36.000539   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.497044   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:38.720509   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.730400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:39.468063   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:41.966415   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:42.276412   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.772916   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:40.996937   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.496928   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:43.220575   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.724283   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:44.465646   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.467000   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:46.773090   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.273675   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:45.997477   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:47.997959   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:49.998126   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.220758   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:50.720911   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:48.966711   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.468554   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:51.772710   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.773277   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:52.501489   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:54.996998   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.221047   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.221493   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:53.965841   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:55.965891   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.465977   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.272446   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:58.772269   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:56.997565   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.496443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:57.722571   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:55:59.724736   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.466069   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.966747   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:00.772715   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.271368   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.274084   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:01.498102   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:03.498428   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:02.220645   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.720012   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:04.966850   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.467719   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:07.772997   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.273279   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:05.998642   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:08.001018   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:10.496939   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:06.721938   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.219709   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:11.220579   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:09.968249   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.465039   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.773538   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.272696   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:12.500855   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.996837   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:13.725252   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:15.725522   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:14.465989   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:16.966908   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.273749   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.772650   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:17.496107   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.496914   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:18.224365   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:20.720429   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:19.465513   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.967092   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.775353   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:24.277586   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:21.498047   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.999733   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.219319   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:25.222340   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:23.967374   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.465973   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.468481   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.772514   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.774642   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:26.496794   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:28.498446   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:27.723499   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.222748   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.965650   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.967183   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.777450   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:33.276381   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:30.999443   46866 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:32.384081   46866 pod_ready.go:81] duration metric: took 4m0.000635015s waiting for pod "metrics-server-57f55c9bc5-mhxpn" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:32.384115   46866 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:32.384132   46866 pod_ready.go:38] duration metric: took 4m11.062812404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:32.384156   46866 kubeadm.go:640] restartCluster took 4m30.437260197s
	W1205 20:56:32.384250   46866 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:32.384280   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
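At this point the no-preload profile has spent its full 4m0s budget waiting for metrics-server to become Ready, gives up on restarting the existing cluster, and falls back to kubeadm reset. An illustrative way to inspect the pod that blocked the wait (pod and namespace names are taken from the log; the kubectl context name is assumed to match the minikube profile no-preload-143651):

    kubectl --context no-preload-143651 -n kube-system get pod metrics-server-57f55c9bc5-mhxpn -o wide
    kubectl --context no-preload-143651 -n kube-system describe pod metrics-server-57f55c9bc5-mhxpn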
	I1205 20:56:32.721610   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.220186   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.467452   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.966451   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:35.773516   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.773737   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.273185   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:37.221794   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:39.722400   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:40.466005   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.467531   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:42.773790   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:45.272396   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:41.722481   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.734080   47365 pod_ready.go:102] pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:43.912982   47365 pod_ready.go:81] duration metric: took 4m0.000982583s waiting for pod "metrics-server-57f55c9bc5-676m6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:43.913024   47365 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:43.913038   47365 pod_ready.go:38] duration metric: took 4m4.801748698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:43.913063   47365 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:56:43.913101   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:43.913175   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:43.965196   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:43.965220   47365 cri.go:89] found id: ""
	I1205 20:56:43.965228   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:43.965272   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:43.970257   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:43.970353   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:44.026974   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.027005   47365 cri.go:89] found id: ""
	I1205 20:56:44.027015   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:44.027099   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.032107   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:44.032212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:44.075721   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:44.075758   47365 cri.go:89] found id: ""
	I1205 20:56:44.075766   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:44.075823   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.082125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:44.082212   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:44.125099   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:44.125122   47365 cri.go:89] found id: ""
	I1205 20:56:44.125129   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:44.125171   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.129477   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:44.129538   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:44.180281   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.180305   47365 cri.go:89] found id: ""
	I1205 20:56:44.180313   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:44.180357   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.185094   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:44.185173   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:44.228693   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.228719   47365 cri.go:89] found id: ""
	I1205 20:56:44.228730   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:44.228786   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.233574   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:44.233687   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:44.279286   47365 cri.go:89] found id: ""
	I1205 20:56:44.279312   47365 logs.go:284] 0 containers: []
	W1205 20:56:44.279321   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:44.279328   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:44.279390   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:44.333572   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.333598   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:44.333605   47365 cri.go:89] found id: ""
	I1205 20:56:44.333614   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:44.333678   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.339080   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:44.343653   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:44.343687   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:44.412744   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:44.412785   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:44.457374   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:44.457402   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:44.521640   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:44.521676   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:44.536612   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:44.536636   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:44.586795   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:44.586836   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:45.065254   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:45.065293   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:45.126209   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:45.126242   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:45.166553   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:45.166580   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:45.214849   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:45.214887   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:45.371687   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:45.371732   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:45.417585   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:45.417615   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:45.455524   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:45.455559   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:44.965462   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.967433   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:47.272958   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.274398   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:46.621173   46866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.236869123s)
	I1205 20:56:46.621264   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:46.636086   46866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:46.647003   46866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:46.657201   46866 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:46.657241   46866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:56:46.882231   46866 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
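The [WARNING Service-Kubelet] line above is kubeadm's standard preflight notice that the kubelet systemd unit is not enabled; it appears repeatedly in these runs because minikube manages the kubelet service itself, so it is generally benign here. If one wanted to silence it on the guest, the remedy is the command the warning names (illustrative, run inside the VM):

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet    # should then report "enabled"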
	I1205 20:56:48.007463   47365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:56:48.023675   47365 api_server.go:72] duration metric: took 4m15.243410399s to wait for apiserver process to appear ...
	I1205 20:56:48.023713   47365 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:56:48.023748   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:48.023818   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:48.067278   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.067301   47365 cri.go:89] found id: ""
	I1205 20:56:48.067308   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:48.067359   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.072370   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:48.072446   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:48.118421   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:48.118444   47365 cri.go:89] found id: ""
	I1205 20:56:48.118453   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:48.118509   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.123954   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:48.124019   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:48.173864   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:48.173890   47365 cri.go:89] found id: ""
	I1205 20:56:48.173900   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:48.173955   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.178717   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:48.178790   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:48.221891   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:48.221915   47365 cri.go:89] found id: ""
	I1205 20:56:48.221924   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:48.221985   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.226811   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:48.226886   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:48.271431   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:48.271454   47365 cri.go:89] found id: ""
	I1205 20:56:48.271463   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:48.271518   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.276572   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:48.276655   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:48.326438   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:48.326466   47365 cri.go:89] found id: ""
	I1205 20:56:48.326476   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:48.326534   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.334539   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:48.334611   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:48.377929   47365 cri.go:89] found id: ""
	I1205 20:56:48.377955   47365 logs.go:284] 0 containers: []
	W1205 20:56:48.377965   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:48.377973   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:48.378035   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:48.430599   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:48.430621   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:48.430629   47365 cri.go:89] found id: ""
	I1205 20:56:48.430638   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:48.430691   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.434882   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:48.439269   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:48.439299   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:48.495069   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:48.495113   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:48.955220   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:48.955257   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:48.971222   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:48.971246   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:49.108437   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:49.108470   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:49.150916   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:49.150940   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:49.207092   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:49.207141   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:49.251940   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:49.251969   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:49.293885   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:49.293918   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:49.349151   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:49.349187   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:49.403042   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:49.403079   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:49.466816   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:49.466858   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:49.525300   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:49.525341   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:49.467873   46700 pod_ready.go:102] pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:49.659950   46700 pod_ready.go:81] duration metric: took 4m0.000950283s waiting for pod "metrics-server-74d5856cc6-pt8v6" in "kube-system" namespace to be "Ready" ...
	E1205 20:56:49.659985   46700 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:56:49.660008   46700 pod_ready.go:38] duration metric: took 4m1.200539602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:56:49.660056   46700 kubeadm.go:640] restartCluster took 5m17.548124184s
	W1205 20:56:49.660130   46700 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:56:49.660162   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:56:51.776117   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:54.275521   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:52.099610   47365 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8444/healthz ...
	I1205 20:56:52.106838   47365 api_server.go:279] https://192.168.39.27:8444/healthz returned 200:
	ok
	I1205 20:56:52.109813   47365 api_server.go:141] control plane version: v1.28.4
	I1205 20:56:52.109835   47365 api_server.go:131] duration metric: took 4.086114093s to wait for apiserver health ...
	I1205 20:56:52.109845   47365 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:56:52.109874   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 20:56:52.109929   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 20:56:52.155290   47365 cri.go:89] found id: "fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:52.155319   47365 cri.go:89] found id: ""
	I1205 20:56:52.155328   47365 logs.go:284] 1 containers: [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883]
	I1205 20:56:52.155382   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.160069   47365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 20:56:52.160137   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 20:56:52.197857   47365 cri.go:89] found id: "1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.197885   47365 cri.go:89] found id: ""
	I1205 20:56:52.197894   47365 logs.go:284] 1 containers: [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3]
	I1205 20:56:52.197956   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.203012   47365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 20:56:52.203075   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 20:56:52.257881   47365 cri.go:89] found id: "95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.257904   47365 cri.go:89] found id: ""
	I1205 20:56:52.257914   47365 logs.go:284] 1 containers: [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc]
	I1205 20:56:52.257972   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.264817   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 20:56:52.264899   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 20:56:52.313302   47365 cri.go:89] found id: "e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.313331   47365 cri.go:89] found id: ""
	I1205 20:56:52.313341   47365 logs.go:284] 1 containers: [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb]
	I1205 20:56:52.313398   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.318864   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 20:56:52.318972   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 20:56:52.389306   47365 cri.go:89] found id: "15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.389333   47365 cri.go:89] found id: ""
	I1205 20:56:52.389342   47365 logs.go:284] 1 containers: [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d]
	I1205 20:56:52.389400   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.406125   47365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 20:56:52.406194   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 20:56:52.458735   47365 cri.go:89] found id: "fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:52.458760   47365 cri.go:89] found id: ""
	I1205 20:56:52.458770   47365 logs.go:284] 1 containers: [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa]
	I1205 20:56:52.458821   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.463571   47365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 20:56:52.463642   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 20:56:52.529035   47365 cri.go:89] found id: ""
	I1205 20:56:52.529067   47365 logs.go:284] 0 containers: []
	W1205 20:56:52.529079   47365 logs.go:286] No container was found matching "kindnet"
	I1205 20:56:52.529088   47365 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 20:56:52.529157   47365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 20:56:52.583543   47365 cri.go:89] found id: "2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:52.583578   47365 cri.go:89] found id: "6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.583585   47365 cri.go:89] found id: ""
	I1205 20:56:52.583594   47365 logs.go:284] 2 containers: [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2]
	I1205 20:56:52.583649   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.589299   47365 ssh_runner.go:195] Run: which crictl
	I1205 20:56:52.595000   47365 logs.go:123] Gathering logs for etcd [1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3] ...
	I1205 20:56:52.595024   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eed3a831d6e90461fec6a0beee6a74145fd33b9949b3a95078779beec57ecc3"
	I1205 20:56:52.671447   47365 logs.go:123] Gathering logs for storage-provisioner [6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2] ...
	I1205 20:56:52.671487   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c766515e85b4744bfb4c05f152457f4532dcba856bf007da6a5986894df7ea2"
	I1205 20:56:52.719185   47365 logs.go:123] Gathering logs for kubelet ...
	I1205 20:56:52.719223   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 20:56:52.780173   47365 logs.go:123] Gathering logs for coredns [95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc] ...
	I1205 20:56:52.780203   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95dae582422a9a9e93dbea847522b15eb0e687a63f032990d0c6cf213dfb6dbc"
	I1205 20:56:52.823808   47365 logs.go:123] Gathering logs for kube-scheduler [e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb] ...
	I1205 20:56:52.823843   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0198751714305caf118f3fa3afd539a468adcc166292fe67e477e140dd00dcb"
	I1205 20:56:52.874394   47365 logs.go:123] Gathering logs for container status ...
	I1205 20:56:52.874428   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 20:56:52.938139   47365 logs.go:123] Gathering logs for kube-proxy [15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d] ...
	I1205 20:56:52.938177   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15eee849957817fe847743f51dcb508fb657101a60f69e7754fff4f028defe4d"
	I1205 20:56:52.982386   47365 logs.go:123] Gathering logs for storage-provisioner [2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3] ...
	I1205 20:56:52.982414   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a816a407fb68ed24a3df5f3540e59fc6c779594bb763d8f05968de05cabdfd3"
	I1205 20:56:53.029082   47365 logs.go:123] Gathering logs for CRI-O ...
	I1205 20:56:53.029111   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 20:56:53.447057   47365 logs.go:123] Gathering logs for dmesg ...
	I1205 20:56:53.447099   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 20:56:53.465029   47365 logs.go:123] Gathering logs for describe nodes ...
	I1205 20:56:53.465066   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 20:56:53.627351   47365 logs.go:123] Gathering logs for kube-apiserver [fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883] ...
	I1205 20:56:53.627400   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fad43ea2e090b499f65fc454968a2e7ed884f59e4d4f55c03f5a7590b8d5e883"
	I1205 20:56:53.694357   47365 logs.go:123] Gathering logs for kube-controller-manager [fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa] ...
	I1205 20:56:53.694393   47365 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fa3b51839f01217bc9dfbeb87f9daac417805d5e8377b16f677dcbcf49e987aa"
	I1205 20:56:56.267579   47365 system_pods.go:59] 8 kube-system pods found
	I1205 20:56:56.267614   47365 system_pods.go:61] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.267624   47365 system_pods.go:61] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.267631   47365 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.267638   47365 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.267644   47365 system_pods.go:61] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.267650   47365 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.267660   47365 system_pods.go:61] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.267672   47365 system_pods.go:61] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.267683   47365 system_pods.go:74] duration metric: took 4.157830691s to wait for pod list to return data ...
	I1205 20:56:56.267696   47365 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:56:56.271148   47365 default_sa.go:45] found service account: "default"
	I1205 20:56:56.271170   47365 default_sa.go:55] duration metric: took 3.468435ms for default service account to be created ...
	I1205 20:56:56.271176   47365 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:56:56.277630   47365 system_pods.go:86] 8 kube-system pods found
	I1205 20:56:56.277654   47365 system_pods.go:89] "coredns-5dd5756b68-6pmzf" [69d0b16d-31bd-4db1-b165-ddbb870d5d48] Running
	I1205 20:56:56.277660   47365 system_pods.go:89] "etcd-default-k8s-diff-port-463614" [958c904f-eca7-450a-9b9d-cc608c302f96] Running
	I1205 20:56:56.277665   47365 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-463614" [b618e43a-5d1d-4317-a8e7-5db2ca5fdb4f] Running
	I1205 20:56:56.277669   47365 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-463614" [00c9b97f-182c-4f37-b7e4-8bf806a609d5] Running
	I1205 20:56:56.277674   47365 system_pods.go:89] "kube-proxy-g4zct" [49655fb8-d84f-4894-9fae-d606eb66ca04] Running
	I1205 20:56:56.277679   47365 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-463614" [cb2043b4-dccb-495b-91f4-b79e1e862792] Running
	I1205 20:56:56.277688   47365 system_pods.go:89] "metrics-server-57f55c9bc5-676m6" [dc304fd9-2922-42f7-b917-5618c6d43f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:56:56.277696   47365 system_pods.go:89] "storage-provisioner" [8662a670-097a-47a4-8839-b65bd104c45a] Running
	I1205 20:56:56.277715   47365 system_pods.go:126] duration metric: took 6.533492ms to wait for k8s-apps to be running ...
	I1205 20:56:56.277726   47365 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:56:56.277772   47365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:56.296846   47365 system_svc.go:56] duration metric: took 19.109991ms WaitForService to wait for kubelet.
	I1205 20:56:56.296877   47365 kubeadm.go:581] duration metric: took 4m23.516618576s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:56:56.296902   47365 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:56:56.301504   47365 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:56:56.301530   47365 node_conditions.go:123] node cpu capacity is 2
	I1205 20:56:56.301542   47365 node_conditions.go:105] duration metric: took 4.634882ms to run NodePressure ...
	I1205 20:56:56.301552   47365 start.go:228] waiting for startup goroutines ...
	I1205 20:56:56.301560   47365 start.go:233] waiting for cluster config update ...
	I1205 20:56:56.301573   47365 start.go:242] writing updated cluster config ...
	I1205 20:56:56.301859   47365 ssh_runner.go:195] Run: rm -f paused
	I1205 20:56:56.357189   47365 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:56:56.358798   47365 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-463614" cluster and "default" namespace by default
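With the default-k8s-diff-port-463614 profile reporting Done, a quick way to confirm what the log just asserted (kubectl pointed at the new cluster, apiserver healthy at https://192.168.39.27:8444, kube-system pods present) would be, illustratively:

    kubectl config current-context      # expect default-k8s-diff-port-463614
    kubectl get --raw /healthz          # expect "ok", matching the :8444/healthz probe above
    kubectl get pods -n kube-system     # the same pod set minikube listed before finishing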
	I1205 20:56:54.756702   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.096505481s)
	I1205 20:56:54.756786   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:56:54.774684   46700 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:56:54.786308   46700 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:56:54.796762   46700 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:56:54.796809   46700 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1205 20:56:55.081318   46700 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:56:58.569752   46866 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.1
	I1205 20:56:58.569873   46866 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:56:58.569988   46866 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:56:58.570119   46866 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:56:58.570261   46866 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:56:58.570368   46866 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:56:58.572785   46866 out.go:204]   - Generating certificates and keys ...
	I1205 20:56:58.573020   46866 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:56:58.573232   46866 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:56:58.573410   46866 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:56:58.573510   46866 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:56:58.573717   46866 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:56:58.573868   46866 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:56:58.574057   46866 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:56:58.574229   46866 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:56:58.574517   46866 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:56:58.574760   46866 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:56:58.574903   46866 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:56:58.575070   46866 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:56:58.575205   46866 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:56:58.575363   46866 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:56:58.575515   46866 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:56:58.575600   46866 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:56:58.575799   46866 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:56:58.576083   46866 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:56:58.576320   46866 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:56:58.580654   46866 out.go:204]   - Booting up control plane ...
	I1205 20:56:58.581337   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:56:58.581851   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:56:58.582029   46866 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:56:58.582667   46866 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:56:58.582988   46866 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:56:58.583126   46866 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:56:58.583631   46866 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:56:58.583908   46866 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502137 seconds
	I1205 20:56:58.584157   46866 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:56:58.584637   46866 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:56:58.584882   46866 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:56:58.585370   46866 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-143651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:56:58.585492   46866 kubeadm.go:322] [bootstrap-token] Using token: fap3k3.pr3uz4d90n7oyvds
	I1205 20:56:58.590063   46866 out.go:204]   - Configuring RBAC rules ...
	I1205 20:56:58.590356   46866 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:56:58.590482   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:56:58.590692   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:56:58.590887   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:56:58.591031   46866 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:56:58.591131   46866 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:56:58.591269   46866 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:56:58.591323   46866 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:56:58.591378   46866 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:56:58.591383   46866 kubeadm.go:322] 
	I1205 20:56:58.591455   46866 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:56:58.591462   46866 kubeadm.go:322] 
	I1205 20:56:58.591554   46866 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:56:58.591559   46866 kubeadm.go:322] 
	I1205 20:56:58.591590   46866 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:56:58.591659   46866 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:56:58.591719   46866 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:56:58.591724   46866 kubeadm.go:322] 
	I1205 20:56:58.591787   46866 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:56:58.591793   46866 kubeadm.go:322] 
	I1205 20:56:58.591848   46866 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:56:58.591853   46866 kubeadm.go:322] 
	I1205 20:56:58.591914   46866 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:56:58.592015   46866 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:56:58.592093   46866 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:56:58.592099   46866 kubeadm.go:322] 
	I1205 20:56:58.592197   46866 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:56:58.592300   46866 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:56:58.592306   46866 kubeadm.go:322] 
	I1205 20:56:58.592403   46866 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592525   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:56:58.592550   46866 kubeadm.go:322] 	--control-plane 
	I1205 20:56:58.592558   46866 kubeadm.go:322] 
	I1205 20:56:58.592645   46866 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:56:58.592650   46866 kubeadm.go:322] 
	I1205 20:56:58.592743   46866 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fap3k3.pr3uz4d90n7oyvds \
	I1205 20:56:58.592870   46866 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:56:58.592880   46866 cni.go:84] Creating CNI manager for ""
	I1205 20:56:58.592889   46866 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:56:58.594456   46866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:56:56.773764   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.778395   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:56:58.595862   46866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:56:58.625177   46866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:56:58.683896   46866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:56:58.683977   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.684060   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=no-preload-143651 minikube.k8s.io/updated_at=2023_12_05T20_56_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:58.741242   46866 ops.go:34] apiserver oom_adj: -16
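The post-init steps just logged for the no-preload-143651 profile (writing the bridge CNI conflist, creating the minikube-rbac clusterrolebinding, labelling the node) can each be checked with the same binaries and paths the log uses; the following is an illustrative sketch, run on the guest:

    sudo ls -l /etc/cni/net.d/ && sudo cat /etc/cni/net.d/1-k8s.conflist
    sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-143651 --show-labels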
	I1205 20:56:59.114129   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.238212   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:56:59.869086   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:00.368538   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.272299   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:03.272604   46374 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:04.992619   46374 pod_ready.go:81] duration metric: took 4m0.000553964s waiting for pod "metrics-server-57f55c9bc5-bcg28" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:04.992652   46374 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 20:57:04.992691   46374 pod_ready.go:38] duration metric: took 4m13.609186276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:04.992726   46374 kubeadm.go:640] restartCluster took 4m33.586092425s
	W1205 20:57:04.992782   46374 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1205 20:57:04.992808   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 20:57:00.868500   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.369084   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:01.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.368409   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:02.869341   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.368765   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:03.869054   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.368855   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:04.869144   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:05.368635   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.047040   46700 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1205 20:57:09.047132   46700 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:09.047236   46700 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:09.047350   46700 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:09.047462   46700 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:09.047583   46700 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:09.047693   46700 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:09.047752   46700 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1205 20:57:09.047825   46700 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:09.049606   46700 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:09.049706   46700 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:09.049802   46700 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:09.049885   46700 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:09.049963   46700 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:09.050058   46700 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:09.050148   46700 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:09.050235   46700 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:09.050350   46700 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:09.050468   46700 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:09.050563   46700 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:09.050627   46700 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:09.050732   46700 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:09.050817   46700 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:09.050897   46700 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:09.050997   46700 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:09.051080   46700 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:09.051165   46700 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:09.052610   46700 out.go:204]   - Booting up control plane ...
	I1205 20:57:09.052722   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:09.052806   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:09.052870   46700 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:09.052965   46700 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:09.053103   46700 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:09.053203   46700 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.005642 seconds
	I1205 20:57:09.053354   46700 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:09.053514   46700 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:09.053563   46700 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:09.053701   46700 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-061206 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1205 20:57:09.053783   46700 kubeadm.go:322] [bootstrap-token] Using token: syik3l.i77juzhd1iybx3my
	I1205 20:57:09.055286   46700 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:09.055409   46700 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:09.055599   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:09.055749   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:09.055862   46700 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:09.055982   46700 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:09.056043   46700 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:09.056106   46700 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:09.056116   46700 kubeadm.go:322] 
	I1205 20:57:09.056197   46700 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:09.056207   46700 kubeadm.go:322] 
	I1205 20:57:09.056307   46700 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:09.056329   46700 kubeadm.go:322] 
	I1205 20:57:09.056377   46700 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:09.056456   46700 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:09.056533   46700 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:09.056540   46700 kubeadm.go:322] 
	I1205 20:57:09.056600   46700 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:09.056669   46700 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:09.056729   46700 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:09.056737   46700 kubeadm.go:322] 
	I1205 20:57:09.056804   46700 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1205 20:57:09.056868   46700 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:09.056874   46700 kubeadm.go:322] 
	I1205 20:57:09.056944   46700 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057093   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:09.057135   46700 kubeadm.go:322]     --control-plane 	  
	I1205 20:57:09.057150   46700 kubeadm.go:322] 
	I1205 20:57:09.057252   46700 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:09.057260   46700 kubeadm.go:322] 
	I1205 20:57:09.057360   46700 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token syik3l.i77juzhd1iybx3my \
	I1205 20:57:09.057502   46700 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:09.057514   46700 cni.go:84] Creating CNI manager for ""
	I1205 20:57:09.057520   46700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:09.058762   46700 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:05.869166   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.368434   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:06.869228   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.369175   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:07.868933   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.369028   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:08.868920   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.369223   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.869130   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.369240   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.869318   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.369189   46866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.576975   46866 kubeadm.go:1088] duration metric: took 12.893071134s to wait for elevateKubeSystemPrivileges.
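	The repeated "kubectl get sa default" runs above are minikube polling until the default service account exists, which is how it detects that kube-system privileges have been elevated. A hedged equivalent check from the host (assuming the profile's kubeconfig context is named no-preload-143651, per the kubeconfig update a few lines below; not part of this run):
		kubectl --context no-preload-143651 -n default get serviceaccount default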
	I1205 20:57:11.577015   46866 kubeadm.go:406] StartCluster complete in 5m9.690903424s
	I1205 20:57:11.577039   46866 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.577129   46866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:11.579783   46866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:11.580131   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:11.580364   46866 config.go:182] Loaded profile config "no-preload-143651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 20:57:11.580360   46866 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:11.580446   46866 addons.go:69] Setting storage-provisioner=true in profile "no-preload-143651"
	I1205 20:57:11.580467   46866 addons.go:231] Setting addon storage-provisioner=true in "no-preload-143651"
	W1205 20:57:11.580479   46866 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:11.580518   46866 addons.go:69] Setting metrics-server=true in profile "no-preload-143651"
	I1205 20:57:11.580535   46866 addons.go:231] Setting addon metrics-server=true in "no-preload-143651"
	W1205 20:57:11.580544   46866 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:11.580575   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580583   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.580982   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580994   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.580497   46866 addons.go:69] Setting default-storageclass=true in profile "no-preload-143651"
	I1205 20:57:11.581018   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581027   46866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-143651"
	I1205 20:57:11.581303   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.581357   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.581383   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.600887   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I1205 20:57:11.600886   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I1205 20:57:11.601552   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601681   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.601760   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I1205 20:57:11.602152   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602177   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602260   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.602348   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.602370   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.602603   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602719   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.602806   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.602996   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.603020   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.603329   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.603379   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.603477   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.603997   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.604040   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.606962   46866 addons.go:231] Setting addon default-storageclass=true in "no-preload-143651"
	W1205 20:57:11.606986   46866 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:11.607009   46866 host.go:66] Checking if "no-preload-143651" exists ...
	I1205 20:57:11.607331   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.607363   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.624885   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
	I1205 20:57:11.625358   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.625857   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.625869   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.626331   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.626627   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I1205 20:57:11.626832   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.627179   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.631282   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1205 20:57:11.632431   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.632516   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.632599   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.632763   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.633113   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.633639   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.633883   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.634495   46866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:11.634539   46866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:11.634823   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.637060   46866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:11.635196   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.641902   46866 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:11.641932   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:11.641960   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.642616   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.644862   46866 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:11.647090   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:11.647113   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:11.647134   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.646852   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647539   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.647564   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.647755   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.648063   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.648295   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.648520   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.654458   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.654493   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654522   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.654556   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.654801   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.655015   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.655247   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.661244   46866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I1205 20:57:11.661886   46866 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:11.662508   46866 main.go:141] libmachine: Using API Version  1
	I1205 20:57:11.662534   46866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:11.663651   46866 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:11.663907   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetState
	I1205 20:57:11.666067   46866 main.go:141] libmachine: (no-preload-143651) Calling .DriverName
	I1205 20:57:11.666501   46866 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.666523   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:11.666543   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHHostname
	I1205 20:57:11.669659   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670106   46866 main.go:141] libmachine: (no-preload-143651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:09:28", ip: ""} in network mk-no-preload-143651: {Iface:virbr3 ExpiryTime:2023-12-05 21:51:35 +0000 UTC Type:0 Mac:52:54:00:2e:09:28 Iaid: IPaddr:192.168.61.162 Prefix:24 Hostname:no-preload-143651 Clientid:01:52:54:00:2e:09:28}
	I1205 20:57:11.670132   46866 main.go:141] libmachine: (no-preload-143651) DBG | domain no-preload-143651 has defined IP address 192.168.61.162 and MAC address 52:54:00:2e:09:28 in network mk-no-preload-143651
	I1205 20:57:11.670479   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHPort
	I1205 20:57:11.670673   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHKeyPath
	I1205 20:57:11.670802   46866 main.go:141] libmachine: (no-preload-143651) Calling .GetSSHUsername
	I1205 20:57:11.670915   46866 sshutil.go:53] new ssh client: &{IP:192.168.61.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/no-preload-143651/id_rsa Username:docker}
	I1205 20:57:11.816687   46866 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-143651" context rescaled to 1 replicas
	I1205 20:57:11.816742   46866 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.162 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:11.820014   46866 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:09.060305   46700 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:09.069861   46700 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1205 20:57:09.093691   46700 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:09.093847   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.093914   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=old-k8s-version-061206 minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.123857   46700 ops.go:34] apiserver oom_adj: -16
	I1205 20:57:09.315555   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:09.435904   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.049845   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:10.549703   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.049931   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.549848   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.049776   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:12.549841   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.050053   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:13.549531   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:11.821903   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:11.831116   46866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:11.867528   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:11.969463   46866 node_ready.go:35] waiting up to 6m0s for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:11.976207   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:11.976235   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:11.977230   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:12.003110   46866 node_ready.go:49] node "no-preload-143651" has status "Ready":"True"
	I1205 20:57:12.003132   46866 node_ready.go:38] duration metric: took 33.629273ms waiting for node "no-preload-143651" to be "Ready" ...
	I1205 20:57:12.003142   46866 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:12.053173   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:12.053208   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:12.140411   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:12.170492   46866 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.170521   46866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:12.251096   46866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:12.778963   46866 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
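	The sed pipeline a few lines above inserts a hosts block mapping host.minikube.internal to 192.168.61.1 into the CoreDNS Corefile before replacing the ConfigMap. An illustrative way to confirm the injected record (assumed command, not captured in this log):
		kubectl --context no-preload-143651 -n kube-system get configmap coredns -o yaml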
	I1205 20:57:12.779026   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779040   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779377   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779402   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.779411   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.779411   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.779418   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.779625   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.779665   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:12.786021   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:12.786045   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:12.786331   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:12.786380   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:12.786400   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194477   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217217088s)
	I1205 20:57:13.194529   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194543   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.194883   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.194929   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.194948   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.194960   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.194970   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.195198   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.195212   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562441   46866 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.311301688s)
	I1205 20:57:13.562496   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562512   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.562826   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.562845   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.562856   46866 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:13.562865   46866 main.go:141] libmachine: (no-preload-143651) Calling .Close
	I1205 20:57:13.563115   46866 main.go:141] libmachine: (no-preload-143651) DBG | Closing plugin on server side
	I1205 20:57:13.563164   46866 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:13.563177   46866 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:13.563190   46866 addons.go:467] Verifying addon metrics-server=true in "no-preload-143651"
	I1205 20:57:13.564940   46866 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:13.566316   46866 addons.go:502] enable addons completed in 1.985974766s: enabled=[default-storageclass storage-provisioner metrics-server]
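	The three addons enabled here can also be toggled and verified through the minikube and kubectl CLIs; the commands below are illustrative equivalents under the same profile name, not commands executed in this run:
		minikube -p no-preload-143651 addons list
		minikube -p no-preload-143651 addons enable metrics-server
		kubectl --context no-preload-143651 -n kube-system get deployment metrics-server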
	I1205 20:57:14.389400   46866 pod_ready.go:102] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:15.388445   46866 pod_ready.go:92] pod "coredns-76f75df574-4n2wg" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.388478   46866 pod_ready.go:81] duration metric: took 3.248030471s waiting for pod "coredns-76f75df574-4n2wg" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.388493   46866 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.391728   46866 pod_ready.go:97] error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391759   46866 pod_ready.go:81] duration metric: took 3.251498ms waiting for pod "coredns-76f75df574-sfnmr" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:15.391772   46866 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-sfnmr" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-sfnmr" not found
	I1205 20:57:15.391781   46866 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399725   46866 pod_ready.go:92] pod "etcd-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.399745   46866 pod_ready.go:81] duration metric: took 7.956804ms waiting for pod "etcd-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.399759   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407412   46866 pod_ready.go:92] pod "kube-apiserver-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.407436   46866 pod_ready.go:81] duration metric: took 7.672123ms waiting for pod "kube-apiserver-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.407446   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414249   46866 pod_ready.go:92] pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.414295   46866 pod_ready.go:81] duration metric: took 6.840313ms waiting for pod "kube-controller-manager-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.414309   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587237   46866 pod_ready.go:92] pod "kube-proxy-6txsz" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.587271   46866 pod_ready.go:81] duration metric: took 172.95478ms waiting for pod "kube-proxy-6txsz" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.587286   46866 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985901   46866 pod_ready.go:92] pod "kube-scheduler-no-preload-143651" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:15.985930   46866 pod_ready.go:81] duration metric: took 398.634222ms waiting for pod "kube-scheduler-no-preload-143651" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:15.985943   46866 pod_ready.go:38] duration metric: took 3.982790764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:15.985960   46866 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:15.986019   46866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:16.009052   46866 api_server.go:72] duration metric: took 4.192253908s to wait for apiserver process to appear ...
	I1205 20:57:16.009082   46866 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:16.009100   46866 api_server.go:253] Checking apiserver healthz at https://192.168.61.162:8443/healthz ...
	I1205 20:57:16.014689   46866 api_server.go:279] https://192.168.61.162:8443/healthz returned 200:
	ok
	I1205 20:57:16.015758   46866 api_server.go:141] control plane version: v1.29.0-rc.1
	I1205 20:57:16.015781   46866 api_server.go:131] duration metric: took 6.691652ms to wait for apiserver health ...
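	The healthz probe above queries the apiserver endpoint directly. A hedged equivalent from the host, skipping TLS verification purely for illustration (passing the cluster CA would be the stricter check):
		curl -k https://192.168.61.162:8443/healthz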
	I1205 20:57:16.015791   46866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:16.188198   46866 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:16.188232   46866 system_pods.go:61] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.188240   46866 system_pods.go:61] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.188246   46866 system_pods.go:61] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.188254   46866 system_pods.go:61] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.188261   46866 system_pods.go:61] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.188267   46866 system_pods.go:61] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.188279   46866 system_pods.go:61] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.188290   46866 system_pods.go:61] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.188301   46866 system_pods.go:74] duration metric: took 172.503422ms to wait for pod list to return data ...
	I1205 20:57:16.188311   46866 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:16.384722   46866 default_sa.go:45] found service account: "default"
	I1205 20:57:16.384759   46866 default_sa.go:55] duration metric: took 196.435091ms for default service account to be created ...
	I1205 20:57:16.384769   46866 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:16.587515   46866 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:16.587542   46866 system_pods.go:89] "coredns-76f75df574-4n2wg" [8a90349b-f4fa-413d-b2fb-8672988095af] Running
	I1205 20:57:16.587547   46866 system_pods.go:89] "etcd-no-preload-143651" [fbf4b620-6012-4aa0-a5dc-97a5e4fcf247] Running
	I1205 20:57:16.587554   46866 system_pods.go:89] "kube-apiserver-no-preload-143651" [bcb11485-2252-4a6f-bb0c-70bdffbd5dbf] Running
	I1205 20:57:16.587561   46866 system_pods.go:89] "kube-controller-manager-no-preload-143651" [87561125-13e6-4485-a938-e13415050be5] Running
	I1205 20:57:16.587567   46866 system_pods.go:89] "kube-proxy-6txsz" [ce2eae51-b812-4cde-a012-1d0b53607ba4] Running
	I1205 20:57:16.587574   46866 system_pods.go:89] "kube-scheduler-no-preload-143651" [5432ed83-2144-4f04-bfe8-418d1a8e122f] Running
	I1205 20:57:16.587585   46866 system_pods.go:89] "metrics-server-57f55c9bc5-xwfpm" [76fbd532-715f-49fd-942d-33a312fb566c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:16.587593   46866 system_pods.go:89] "storage-provisioner" [70819185-f661-434d-b039-e8b822dbc886] Running
	I1205 20:57:16.587604   46866 system_pods.go:126] duration metric: took 202.829744ms to wait for k8s-apps to be running ...
	I1205 20:57:16.587613   46866 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:16.587654   46866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:16.602489   46866 system_svc.go:56] duration metric: took 14.864421ms WaitForService to wait for kubelet.
	I1205 20:57:16.602521   46866 kubeadm.go:581] duration metric: took 4.785728725s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:16.602545   46866 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:16.785610   46866 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:16.785646   46866 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:16.785663   46866 node_conditions.go:105] duration metric: took 183.112914ms to run NodePressure ...
	I1205 20:57:16.785677   46866 start.go:228] waiting for startup goroutines ...
	I1205 20:57:16.785686   46866 start.go:233] waiting for cluster config update ...
	I1205 20:57:16.785705   46866 start.go:242] writing updated cluster config ...
	I1205 20:57:16.786062   46866 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:16.840981   46866 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.1 (minor skew: 1)
	I1205 20:57:16.842980   46866 out.go:177] * Done! kubectl is now configured to use "no-preload-143651" cluster and "default" namespace by default
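	The "minor skew: 1" note means the host kubectl (1.28.4) is one minor version behind the cluster (v1.29.0-rc.1), which is within kubectl's supported +/-1 version skew. Illustrative follow-up checks (assumed commands, not part of this log):
		kubectl version --output=yaml
		kubectl --context no-preload-143651 get nodes -o wide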
	I1205 20:57:14.049305   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:14.549423   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.050061   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:15.550221   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.049450   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:16.550094   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.049900   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:17.549923   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.050255   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:18.549399   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.615362   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.62253521s)
	I1205 20:57:19.615425   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:19.633203   46374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:57:19.643629   46374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:57:19.653655   46374 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
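	These kubeconfig files are absent because the kubeadm reset that completed just above removes them (along with the static Pod manifests), so minikube skips stale-config cleanup and the kubeadm init below regenerates them. An illustrative post-init check, with PROFILE standing in for this run's profile name, which is not shown in this excerpt:
		minikube -p PROFILE ssh "sudo ls -la /etc/kubernetes/*.conf"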
	I1205 20:57:19.653717   46374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:57:19.709748   46374 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1205 20:57:19.709836   46374 kubeadm.go:322] [preflight] Running pre-flight checks
	I1205 20:57:19.887985   46374 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:57:19.888143   46374 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:57:19.888243   46374 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 20:57:20.145182   46374 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:57:20.147189   46374 out.go:204]   - Generating certificates and keys ...
	I1205 20:57:20.147319   46374 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1205 20:57:20.147389   46374 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1205 20:57:20.147482   46374 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 20:57:20.147875   46374 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1205 20:57:20.148583   46374 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 20:57:20.149486   46374 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1205 20:57:20.150362   46374 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1205 20:57:20.150974   46374 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1205 20:57:20.151523   46374 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 20:57:20.152166   46374 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 20:57:20.152419   46374 kubeadm.go:322] [certs] Using the existing "sa" key
	I1205 20:57:20.152504   46374 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:57:20.435395   46374 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:57:20.606951   46374 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:57:20.754435   46374 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:57:20.953360   46374 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:57:20.954288   46374 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:57:20.958413   46374 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:57:19.049689   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:19.549608   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.049856   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:20.550245   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.050001   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:21.549839   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.049908   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:22.549764   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.050204   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:23.550196   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.049420   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:24.550152   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.050103   46700 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:25.202067   46700 kubeadm.go:1088] duration metric: took 16.108268519s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:25.202100   46700 kubeadm.go:406] StartCluster complete in 5m53.142100786s
	I1205 20:57:25.202121   46700 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.202211   46700 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:25.204920   46700 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:25.205284   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:25.205635   46700 config.go:182] Loaded profile config "old-k8s-version-061206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1205 20:57:25.205792   46700 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:25.205865   46700 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-061206"
	I1205 20:57:25.205888   46700 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-061206"
	W1205 20:57:25.205896   46700 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:25.205954   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.205982   46700 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206011   46700 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-061206"
	I1205 20:57:25.206429   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206436   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206457   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206459   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.206517   46700 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-061206"
	I1205 20:57:25.206531   46700 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-061206"
	W1205 20:57:25.206538   46700 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:25.206578   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.206906   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.206936   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.228876   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I1205 20:57:25.228902   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1205 20:57:25.229036   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I1205 20:57:25.229487   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229569   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.229646   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.230209   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230230   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230413   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230426   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230468   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.230492   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.230851   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.231494   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.231520   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.231955   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.232544   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.232578   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.233084   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.233307   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.237634   46700 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-061206"
	W1205 20:57:25.237660   46700 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:25.237691   46700 host.go:66] Checking if "old-k8s-version-061206" exists ...
	I1205 20:57:25.238103   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.238138   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.252274   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45399
	I1205 20:57:25.252709   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.253307   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.253327   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.253689   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.253874   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.255891   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.258376   46700 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:25.256849   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I1205 20:57:25.260119   46700 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.260145   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:25.260168   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.261358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.262042   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.262063   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.262590   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.262765   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.265705   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.265905   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.267942   46700 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:25.266347   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.266528   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.269653   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.269661   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:25.269687   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:25.269708   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.270383   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.270602   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.270764   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.274415   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.274914   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.274939   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.275267   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.275451   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.275594   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.275736   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.282847   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I1205 20:57:25.283552   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.284174   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.284192   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.284659   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.285434   46700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:25.285469   46700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:25.306845   46700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1205 20:57:25.307358   46700 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:25.307884   46700 main.go:141] libmachine: Using API Version  1
	I1205 20:57:25.307905   46700 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:25.308302   46700 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:25.308605   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetState
	I1205 20:57:25.310363   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .DriverName
	I1205 20:57:25.310649   46700 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.310663   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:25.310682   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHHostname
	I1205 20:57:25.313904   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314451   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:f7:bc", ip: ""} in network mk-old-k8s-version-061206: {Iface:virbr2 ExpiryTime:2023-12-05 21:51:15 +0000 UTC Type:0 Mac:52:54:00:f9:f7:bc Iaid: IPaddr:192.168.50.116 Prefix:24 Hostname:old-k8s-version-061206 Clientid:01:52:54:00:f9:f7:bc}
	I1205 20:57:25.314482   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | domain old-k8s-version-061206 has defined IP address 192.168.50.116 and MAC address 52:54:00:f9:f7:bc in network mk-old-k8s-version-061206
	I1205 20:57:25.314756   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHPort
	I1205 20:57:25.314941   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHKeyPath
	I1205 20:57:25.315053   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .GetSSHUsername
	I1205 20:57:25.315153   46700 sshutil.go:53] new ssh client: &{IP:192.168.50.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/old-k8s-version-061206/id_rsa Username:docker}
	I1205 20:57:25.456874   46700 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-061206" context rescaled to 1 replicas
	I1205 20:57:25.456922   46700 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.116 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:25.459008   46700 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:20.960444   46374 out.go:204]   - Booting up control plane ...
	I1205 20:57:20.960603   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:57:20.960721   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:57:20.961220   46374 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:57:20.981073   46374 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:57:20.982383   46374 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:57:20.982504   46374 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1205 20:57:21.127167   46374 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 20:57:25.460495   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:25.531367   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:25.531600   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:25.531618   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:25.543589   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:25.624622   46700 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.624655   46700 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
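(Reference note, not part of the log: the bash pipeline above rewrites the Corefile stored in the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway. Assuming an otherwise stock Corefile, the relevant part of the result looks roughly like this; surrounding plugins are unchanged and omitted here.)

    .:53 {
        log
        errors
        # ... other plugins (health, kubernetes, cache, ...) unchanged ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }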
	I1205 20:57:25.660979   46700 node_ready.go:49] node "old-k8s-version-061206" has status "Ready":"True"
	I1205 20:57:25.661005   46700 node_ready.go:38] duration metric: took 36.286483ms waiting for node "old-k8s-version-061206" to be "Ready" ...
	I1205 20:57:25.661017   46700 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:25.666179   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:25.666208   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:25.796077   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:26.018114   46700 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.018141   46700 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:26.124357   46700 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:26.905138   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.37373154s)
	I1205 20:57:26.905210   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905229   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905526   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905553   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.905567   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.905576   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.905852   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:26.905905   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.905917   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964563   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:26.964593   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:26.964920   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:26.964940   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:26.964974   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465231   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.92160273s)
	I1205 20:57:27.465236   46700 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.840348969s)
	I1205 20:57:27.465312   46700 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:27.465289   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465379   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.465718   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.465761   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.465771   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.465780   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.465790   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.467788   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.467820   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.467829   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628166   46700 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.503702639s)
	I1205 20:57:27.628242   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628262   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628592   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628617   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628627   46700 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:27.628637   46700 main.go:141] libmachine: (old-k8s-version-061206) Calling .Close
	I1205 20:57:27.628714   46700 main.go:141] libmachine: (old-k8s-version-061206) DBG | Closing plugin on server side
	I1205 20:57:27.628851   46700 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:27.628866   46700 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:27.628885   46700 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-061206"
	I1205 20:57:27.632134   46700 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:27.634065   46700 addons.go:502] enable addons completed in 2.428270131s: enabled=[default-storageclass storage-provisioner metrics-server]
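(Illustrative check, not from the log: once "enable addons completed" is reported, the same state can be inspected by hand. The profile name below comes from the log; the kubeconfig context name is assumed to match the profile, as minikube normally sets it.)

    out/minikube-linux-amd64 -p old-k8s-version-061206 addons list
    kubectl --context old-k8s-version-061206 -n kube-system get deploy metrics-server
    kubectl --context old-k8s-version-061206 get apiservice v1beta1.metrics.k8s.io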
	I1205 20:57:28.052082   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:29.630980   46374 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503524 seconds
	I1205 20:57:29.631109   46374 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:57:29.651107   46374 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:57:30.184174   46374 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:57:30.184401   46374 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-331495 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:57:30.703275   46374 kubeadm.go:322] [bootstrap-token] Using token: 28cbrl.nve3765a0enwbcr0
	I1205 20:57:30.705013   46374 out.go:204]   - Configuring RBAC rules ...
	I1205 20:57:30.705155   46374 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:57:30.718386   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:57:30.727275   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:57:30.734448   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:57:30.741266   46374 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:57:30.746706   46374 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:57:30.765198   46374 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:57:31.046194   46374 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1205 20:57:31.133417   46374 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1205 20:57:31.133438   46374 kubeadm.go:322] 
	I1205 20:57:31.133501   46374 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1205 20:57:31.133509   46374 kubeadm.go:322] 
	I1205 20:57:31.133647   46374 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1205 20:57:31.133667   46374 kubeadm.go:322] 
	I1205 20:57:31.133707   46374 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1205 20:57:31.133781   46374 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:57:31.133853   46374 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:57:31.133863   46374 kubeadm.go:322] 
	I1205 20:57:31.133918   46374 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1205 20:57:31.133925   46374 kubeadm.go:322] 
	I1205 20:57:31.133983   46374 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:57:31.133993   46374 kubeadm.go:322] 
	I1205 20:57:31.134042   46374 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1205 20:57:31.134103   46374 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:57:31.134262   46374 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:57:31.134300   46374 kubeadm.go:322] 
	I1205 20:57:31.134417   46374 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:57:31.134526   46374 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1205 20:57:31.134541   46374 kubeadm.go:322] 
	I1205 20:57:31.134671   46374 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.134823   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 \
	I1205 20:57:31.134858   46374 kubeadm.go:322] 	--control-plane 
	I1205 20:57:31.134867   46374 kubeadm.go:322] 
	I1205 20:57:31.134986   46374 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:57:31.134997   46374 kubeadm.go:322] 
	I1205 20:57:31.135114   46374 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 28cbrl.nve3765a0enwbcr0 \
	I1205 20:57:31.135272   46374 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:023523bd6b5a44f4c6b1b70dc35c2a2792e7b67e8127992054bef2c0a7a22e71 
	I1205 20:57:31.135908   46374 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
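(Aside, not part of the log: the sha256 discovery hash printed in the join command above can be recomputed on the control-plane node from the cluster CA, per standard kubeadm practice.)

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'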
	I1205 20:57:31.135934   46374 cni.go:84] Creating CNI manager for ""
	I1205 20:57:31.135944   46374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:57:31.137845   46374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:57:30.540402   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:33.040756   46700 pod_ready.go:102] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:31.139429   46374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:57:31.181897   46374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
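(The 457-byte conflist written above is not shown in the log; a representative bridge CNI config of the kind this step generates looks roughly like the following. Field values here are illustrative assumptions, not the actual contents of /etc/cni/net.d/1-k8s.conflist.)

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }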
	I1205 20:57:31.202833   46374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:57:31.202901   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.202910   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728 minikube.k8s.io/name=embed-certs-331495 minikube.k8s.io/updated_at=2023_12_05T20_57_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.298252   46374 ops.go:34] apiserver oom_adj: -16
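(Not from the log: the three commands above can be spot-checked by hand; the object names below are taken straight from the logged commands.)

    # run inside the node, e.g. via minikube ssh; -16 matches the oom_adj reported above
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    kubectl --context embed-certs-331495 get clusterrolebinding minikube-rbac
    kubectl --context embed-certs-331495 get node embed-certs-331495 --show-labels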
	I1205 20:57:31.569929   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:31.694250   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.294912   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:32.795323   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.295495   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:33.794998   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.294843   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.794730   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:35.295505   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:34.538542   46700 pod_ready.go:92] pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.538568   46700 pod_ready.go:81] duration metric: took 8.742457359s waiting for pod "coredns-5644d7b6d9-qm52j" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.538579   46700 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.540738   46700 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540763   46700 pod_ready.go:81] duration metric: took 2.177251ms waiting for pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace to be "Ready" ...
	E1205 20:57:34.540771   46700 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-vmt9k" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-vmt9k" not found
	I1205 20:57:34.540777   46700 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545336   46700 pod_ready.go:92] pod "kube-proxy-j68qr" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:34.545360   46700 pod_ready.go:81] duration metric: took 4.576584ms waiting for pod "kube-proxy-j68qr" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:34.545370   46700 pod_ready.go:38] duration metric: took 8.884340587s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:34.545387   46700 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:34.545442   46700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:34.561744   46700 api_server.go:72] duration metric: took 9.104792218s to wait for apiserver process to appear ...
	I1205 20:57:34.561769   46700 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:34.561786   46700 api_server.go:253] Checking apiserver healthz at https://192.168.50.116:8443/healthz ...
	I1205 20:57:34.568456   46700 api_server.go:279] https://192.168.50.116:8443/healthz returned 200:
	ok
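(Illustrative equivalent of the healthz check above, not part of the log; run from a host with a route to the 192.168.50.0/24 KVM network, -k skips verification against the cluster CA.)

    curl -sk https://192.168.50.116:8443/healthz
    # expected response body: ok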
	I1205 20:57:34.569584   46700 api_server.go:141] control plane version: v1.16.0
	I1205 20:57:34.569608   46700 api_server.go:131] duration metric: took 7.832231ms to wait for apiserver health ...
	I1205 20:57:34.569618   46700 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:34.573936   46700 system_pods.go:59] 4 kube-system pods found
	I1205 20:57:34.573962   46700 system_pods.go:61] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.573969   46700 system_pods.go:61] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.573979   46700 system_pods.go:61] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.573989   46700 system_pods.go:61] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.574004   46700 system_pods.go:74] duration metric: took 4.378461ms to wait for pod list to return data ...
	I1205 20:57:34.574016   46700 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:34.577236   46700 default_sa.go:45] found service account: "default"
	I1205 20:57:34.577258   46700 default_sa.go:55] duration metric: took 3.232577ms for default service account to be created ...
	I1205 20:57:34.577268   46700 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:34.581061   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.581080   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.581086   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.581093   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.581098   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.581112   46700 retry.go:31] will retry after 312.287284ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
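(Not from the log: the "missing components" retries above wait for the static-pod mirror pods that carry the component labels listed earlier; while the retry loop runs they can be listed directly, e.g.:)

    kubectl --context old-k8s-version-061206 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'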
	I1205 20:57:34.898504   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:34.898531   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:34.898536   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:34.898545   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:34.898549   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:34.898563   46700 retry.go:31] will retry after 340.858289ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.244211   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.244237   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.244242   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.244249   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.244253   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.244267   46700 retry.go:31] will retry after 398.30611ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.649011   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:35.649042   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:35.649050   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:35.649061   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:35.649068   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:35.649086   46700 retry.go:31] will retry after 397.404602ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.052047   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.052079   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.052087   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.052097   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.052105   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.052124   46700 retry.go:31] will retry after 604.681853ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:36.662177   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:36.662206   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:36.662213   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:36.662223   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:36.662229   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:36.662247   46700 retry.go:31] will retry after 732.227215ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:37.399231   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:37.399264   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:37.399272   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:37.399282   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:37.399289   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:37.399308   46700 retry.go:31] will retry after 1.17612773s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:35.795241   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.295081   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:36.795352   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.295506   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:37.794785   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.294797   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.794948   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.295478   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:39.795706   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:40.295444   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:38.581173   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:38.581201   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:38.581207   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:38.581220   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:38.581225   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:38.581239   46700 retry.go:31] will retry after 1.118915645s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:39.704807   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:39.704835   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:39.704841   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:39.704847   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:39.704854   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:39.704872   46700 retry.go:31] will retry after 1.49556329s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:41.205278   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:41.205316   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:41.205324   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:41.205331   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:41.205336   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:41.205357   46700 retry.go:31] will retry after 2.273757829s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:43.485079   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:43.485109   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:43.485125   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:43.485132   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:43.485137   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:43.485153   46700 retry.go:31] will retry after 2.2120181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:40.794725   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.295631   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:41.795542   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.295514   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:42.795481   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.295525   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:43.795463   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.295442   46374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:57:44.451570   46374 kubeadm.go:1088] duration metric: took 13.248732973s to wait for elevateKubeSystemPrivileges.
	I1205 20:57:44.451605   46374 kubeadm.go:406] StartCluster complete in 5m13.096778797s
	I1205 20:57:44.451631   46374 settings.go:142] acquiring lock: {Name:mk7bd5802137c74c7cc612ca187f45710edf5f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.451730   46374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:57:44.454306   46374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17731-6237/kubeconfig: {Name:mk6d81659fca166acee41a29ab2a86bccbe05e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:57:44.454587   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:57:44.454611   46374 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1205 20:57:44.454695   46374 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-331495"
	I1205 20:57:44.454720   46374 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-331495"
	W1205 20:57:44.454731   46374 addons.go:240] addon storage-provisioner should already be in state true
	I1205 20:57:44.454766   46374 addons.go:69] Setting default-storageclass=true in profile "embed-certs-331495"
	I1205 20:57:44.454781   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.454783   46374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-331495"
	I1205 20:57:44.454840   46374 config.go:182] Loaded profile config "embed-certs-331495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:57:44.454884   46374 addons.go:69] Setting metrics-server=true in profile "embed-certs-331495"
	I1205 20:57:44.454899   46374 addons.go:231] Setting addon metrics-server=true in "embed-certs-331495"
	W1205 20:57:44.454907   46374 addons.go:240] addon metrics-server should already be in state true
	I1205 20:57:44.454949   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.455191   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455213   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455216   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455231   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.455237   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.455259   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.473063   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I1205 20:57:44.473083   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1205 20:57:44.473135   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I1205 20:57:44.473509   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.473642   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474153   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474171   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474179   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474197   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474336   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.474566   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474637   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.474761   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.474785   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.474877   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.475234   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475260   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.475295   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.475833   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.475871   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.478828   46374 addons.go:231] Setting addon default-storageclass=true in "embed-certs-331495"
	W1205 20:57:44.478852   46374 addons.go:240] addon default-storageclass should already be in state true
	I1205 20:57:44.478882   46374 host.go:66] Checking if "embed-certs-331495" exists ...
	I1205 20:57:44.479277   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.479311   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.493193   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1205 20:57:44.493380   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I1205 20:57:44.493637   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.493775   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.494092   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494108   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494242   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.494252   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.494488   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494624   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.494682   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.494834   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.496908   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.497156   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.498954   46374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:57:44.500583   46374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 20:57:44.499205   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1205 20:57:44.502186   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:57:44.502199   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:57:44.502214   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.502313   46374 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.502329   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:57:44.502349   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.503728   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.504065   46374 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-331495" context rescaled to 1 replicas
	I1205 20:57:44.504105   46374 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:57:44.505773   46374 out.go:177] * Verifying Kubernetes components...
	I1205 20:57:44.507622   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:44.505350   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.507719   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.505638   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.507792   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.507821   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.506710   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.507399   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508237   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.508287   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508353   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.508369   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.508440   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.508506   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.508671   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.508678   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.508996   46374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:57:44.509016   46374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:57:44.509373   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.509567   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.525720   46374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I1205 20:57:44.526352   46374 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:57:44.526817   46374 main.go:141] libmachine: Using API Version  1
	I1205 20:57:44.526831   46374 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:57:44.527096   46374 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:57:44.527248   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetState
	I1205 20:57:44.529415   46374 main.go:141] libmachine: (embed-certs-331495) Calling .DriverName
	I1205 20:57:44.529714   46374 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.529725   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:57:44.529737   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHHostname
	I1205 20:57:44.532475   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533019   46374 main.go:141] libmachine: (embed-certs-331495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:87:db", ip: ""} in network mk-embed-certs-331495: {Iface:virbr4 ExpiryTime:2023-12-05 21:52:15 +0000 UTC Type:0 Mac:52:54:00:95:87:db Iaid: IPaddr:192.168.72.180 Prefix:24 Hostname:embed-certs-331495 Clientid:01:52:54:00:95:87:db}
	I1205 20:57:44.533042   46374 main.go:141] libmachine: (embed-certs-331495) DBG | domain embed-certs-331495 has defined IP address 192.168.72.180 and MAC address 52:54:00:95:87:db in network mk-embed-certs-331495
	I1205 20:57:44.533250   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHPort
	I1205 20:57:44.533393   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHKeyPath
	I1205 20:57:44.533527   46374 main.go:141] libmachine: (embed-certs-331495) Calling .GetSSHUsername
	I1205 20:57:44.533614   46374 sshutil.go:53] new ssh client: &{IP:192.168.72.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/embed-certs-331495/id_rsa Username:docker}
	I1205 20:57:44.688130   46374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:57:44.688235   46374 node_ready.go:35] waiting up to 6m0s for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727420   46374 node_ready.go:49] node "embed-certs-331495" has status "Ready":"True"
	I1205 20:57:44.727442   46374 node_ready.go:38] duration metric: took 39.185885ms waiting for node "embed-certs-331495" to be "Ready" ...
	I1205 20:57:44.727450   46374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:44.732130   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:57:44.732147   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 20:57:44.738201   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:57:44.771438   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:57:44.811415   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:57:44.811441   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:57:44.813276   46374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:44.891164   46374 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:44.891188   46374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:57:44.982166   46374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:57:46.640482   46374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.952307207s)
	I1205 20:57:46.640514   46374 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1205 20:57:46.640492   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.902257941s)
	I1205 20:57:46.640549   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640567   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.640954   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.640974   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.640985   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.640994   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.641299   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.641316   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:46.641317   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669046   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:46.669072   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:46.669393   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:46.669467   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:46.669486   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229043   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.457564146s)
	I1205 20:57:47.229106   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229122   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.229427   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.229442   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.229451   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.229460   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.230375   46374 main.go:141] libmachine: (embed-certs-331495) DBG | Closing plugin on server side
	I1205 20:57:47.230383   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.230399   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.269645   46374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.287430037s)
	I1205 20:57:47.269701   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.269717   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270028   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270044   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270053   46374 main.go:141] libmachine: Making call to close driver server
	I1205 20:57:47.270062   46374 main.go:141] libmachine: (embed-certs-331495) Calling .Close
	I1205 20:57:47.270370   46374 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:57:47.270387   46374 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:57:47.270397   46374 addons.go:467] Verifying addon metrics-server=true in "embed-certs-331495"
	I1205 20:57:47.272963   46374 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 20:57:45.704352   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:45.704382   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:45.704392   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:45.704402   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:45.704408   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:45.704427   46700 retry.go:31] will retry after 3.581529213s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:47.274340   46374 addons.go:502] enable addons completed in 2.819728831s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 20:57:47.280325   46374 pod_ready.go:102] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"False"
	I1205 20:57:48.746184   46374 pod_ready.go:92] pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.746205   46374 pod_ready.go:81] duration metric: took 3.932903963s waiting for pod "coredns-5dd5756b68-6d7wq" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.746212   46374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752060   46374 pod_ready.go:92] pod "etcd-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.752078   46374 pod_ready.go:81] duration metric: took 5.859638ms waiting for pod "etcd-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.752088   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757347   46374 pod_ready.go:92] pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.757367   46374 pod_ready.go:81] duration metric: took 5.273527ms waiting for pod "kube-apiserver-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.757375   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762850   46374 pod_ready.go:92] pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.762869   46374 pod_ready.go:81] duration metric: took 5.4878ms waiting for pod "kube-controller-manager-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.762876   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767874   46374 pod_ready.go:92] pod "kube-proxy-tbr8k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:48.767896   46374 pod_ready.go:81] duration metric: took 5.013139ms waiting for pod "kube-proxy-tbr8k" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:48.767907   46374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141813   46374 pod_ready.go:92] pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace has status "Ready":"True"
	I1205 20:57:49.141836   46374 pod_ready.go:81] duration metric: took 373.922185ms waiting for pod "kube-scheduler-embed-certs-331495" in "kube-system" namespace to be "Ready" ...
	I1205 20:57:49.141844   46374 pod_ready.go:38] duration metric: took 4.414384404s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:57:49.141856   46374 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:57:49.141898   46374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:57:49.156536   46374 api_server.go:72] duration metric: took 4.652397468s to wait for apiserver process to appear ...
	I1205 20:57:49.156566   46374 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:57:49.156584   46374 api_server.go:253] Checking apiserver healthz at https://192.168.72.180:8443/healthz ...
	I1205 20:57:49.162837   46374 api_server.go:279] https://192.168.72.180:8443/healthz returned 200:
	ok
	I1205 20:57:49.164588   46374 api_server.go:141] control plane version: v1.28.4
	I1205 20:57:49.164606   46374 api_server.go:131] duration metric: took 8.03498ms to wait for apiserver health ...
	I1205 20:57:49.164613   46374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:57:49.346033   46374 system_pods.go:59] 8 kube-system pods found
	I1205 20:57:49.346065   46374 system_pods.go:61] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.346069   46374 system_pods.go:61] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.346074   46374 system_pods.go:61] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.346079   46374 system_pods.go:61] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.346082   46374 system_pods.go:61] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.346086   46374 system_pods.go:61] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.346092   46374 system_pods.go:61] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.346098   46374 system_pods.go:61] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 20:57:49.346105   46374 system_pods.go:74] duration metric: took 181.48718ms to wait for pod list to return data ...
	I1205 20:57:49.346111   46374 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:57:49.541758   46374 default_sa.go:45] found service account: "default"
	I1205 20:57:49.541783   46374 default_sa.go:55] duration metric: took 195.666774ms for default service account to be created ...
	I1205 20:57:49.541791   46374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:57:49.746101   46374 system_pods.go:86] 8 kube-system pods found
	I1205 20:57:49.746131   46374 system_pods.go:89] "coredns-5dd5756b68-6d7wq" [c4525c8a-b7e3-450f-bdb4-12dfeb0ff203] Running
	I1205 20:57:49.746136   46374 system_pods.go:89] "etcd-embed-certs-331495" [4303e650-22fe-44a7-b2d8-e5acd4637a1d] Running
	I1205 20:57:49.746142   46374 system_pods.go:89] "kube-apiserver-embed-certs-331495" [404121f0-2eca-41d8-a0bf-5c47f53a5d34] Running
	I1205 20:57:49.746147   46374 system_pods.go:89] "kube-controller-manager-embed-certs-331495" [289f12fc-bfe9-44bb-a392-ef7c4eb6984d] Running
	I1205 20:57:49.746150   46374 system_pods.go:89] "kube-proxy-tbr8k" [8138c69a-41ce-4880-b2ac-274dff0bdeba] Running
	I1205 20:57:49.746155   46374 system_pods.go:89] "kube-scheduler-embed-certs-331495" [eb895ae6-b984-43dd-a507-8b2d507ad62d] Running
	I1205 20:57:49.746170   46374 system_pods.go:89] "metrics-server-57f55c9bc5-wv2t6" [4cd8c975-aaf4-4ae0-9e6a-f644978f4127] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.746175   46374 system_pods.go:89] "storage-provisioner" [5c366deb-4564-44b8-87fe-45e03cf7a774] Running
	I1205 20:57:49.746183   46374 system_pods.go:126] duration metric: took 204.388635ms to wait for k8s-apps to be running ...
	I1205 20:57:49.746193   46374 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:57:49.746241   46374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:57:49.764758   46374 system_svc.go:56] duration metric: took 18.554759ms WaitForService to wait for kubelet.
	I1205 20:57:49.764784   46374 kubeadm.go:581] duration metric: took 5.260652386s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:57:49.764801   46374 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:57:49.942067   46374 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:57:49.942095   46374 node_conditions.go:123] node cpu capacity is 2
	I1205 20:57:49.942105   46374 node_conditions.go:105] duration metric: took 177.300297ms to run NodePressure ...
	I1205 20:57:49.942114   46374 start.go:228] waiting for startup goroutines ...
	I1205 20:57:49.942120   46374 start.go:233] waiting for cluster config update ...
	I1205 20:57:49.942129   46374 start.go:242] writing updated cluster config ...
	I1205 20:57:49.942407   46374 ssh_runner.go:195] Run: rm -f paused
	I1205 20:57:49.995837   46374 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1205 20:57:49.997691   46374 out.go:177] * Done! kubectl is now configured to use "embed-certs-331495" cluster and "default" namespace by default
	I1205 20:57:49.291672   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:49.291700   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:49.291705   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:49.291713   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:49.291718   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:49.291736   46700 retry.go:31] will retry after 3.015806566s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:52.313677   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:52.313703   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:52.313711   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:52.313721   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:52.313727   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:52.313747   46700 retry.go:31] will retry after 4.481475932s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:57:56.804282   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:57:56.804308   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:57:56.804314   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:57:56.804321   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:57:56.804325   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:57:56.804340   46700 retry.go:31] will retry after 6.744179014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:03.556623   46700 system_pods.go:86] 4 kube-system pods found
	I1205 20:58:03.556652   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:03.556660   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:03.556669   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:03.556676   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:03.556696   46700 retry.go:31] will retry after 7.974872066s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:11.540488   46700 system_pods.go:86] 6 kube-system pods found
	I1205 20:58:11.540516   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:11.540522   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Pending
	I1205 20:58:11.540526   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Pending
	I1205 20:58:11.540530   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:11.540537   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:11.540541   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:11.540556   46700 retry.go:31] will retry after 10.29278609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1205 20:58:21.841415   46700 system_pods.go:86] 7 kube-system pods found
	I1205 20:58:21.841442   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:21.841450   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:21.841457   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:21.841463   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:21.841468   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:21.841478   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:21.841485   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:21.841503   46700 retry.go:31] will retry after 10.997616244s: missing components: kube-scheduler
	I1205 20:58:32.846965   46700 system_pods.go:86] 8 kube-system pods found
	I1205 20:58:32.846999   46700 system_pods.go:89] "coredns-5644d7b6d9-qm52j" [19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2] Running
	I1205 20:58:32.847007   46700 system_pods.go:89] "etcd-old-k8s-version-061206" [180c0d08-2361-4336-9dff-3d3bd5dfc247] Running
	I1205 20:58:32.847016   46700 system_pods.go:89] "kube-apiserver-old-k8s-version-061206" [a9dc4527-e795-4510-98b7-8e52c52f1a6f] Running
	I1205 20:58:32.847023   46700 system_pods.go:89] "kube-controller-manager-old-k8s-version-061206" [4ff08ab8-bc1c-481f-b59a-1184fc8d22da] Running
	I1205 20:58:32.847028   46700 system_pods.go:89] "kube-proxy-j68qr" [857e6815-cb4c-477d-af24-941a37f65f6a] Running
	I1205 20:58:32.847032   46700 system_pods.go:89] "kube-scheduler-old-k8s-version-061206" [e19a40ac-ac9b-4dc8-8ed3-c13da266bb88] Running
	I1205 20:58:32.847041   46700 system_pods.go:89] "metrics-server-74d5856cc6-jbxkl" [ea6e50b4-4224-441e-878d-bff37f046528] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:58:32.847049   46700 system_pods.go:89] "storage-provisioner" [9e493874-629d-4446-b372-47fa158aea4a] Running
	I1205 20:58:32.847061   46700 system_pods.go:126] duration metric: took 58.26978612s to wait for k8s-apps to be running ...
	I1205 20:58:32.847074   46700 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:58:32.847122   46700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:58:32.866233   46700 system_svc.go:56] duration metric: took 19.150294ms WaitForService to wait for kubelet.
	I1205 20:58:32.866267   46700 kubeadm.go:581] duration metric: took 1m7.409317219s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1205 20:58:32.866308   46700 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:58:32.870543   46700 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1205 20:58:32.870569   46700 node_conditions.go:123] node cpu capacity is 2
	I1205 20:58:32.870581   46700 node_conditions.go:105] duration metric: took 4.266682ms to run NodePressure ...
	I1205 20:58:32.870604   46700 start.go:228] waiting for startup goroutines ...
	I1205 20:58:32.870630   46700 start.go:233] waiting for cluster config update ...
	I1205 20:58:32.870646   46700 start.go:242] writing updated cluster config ...
	I1205 20:58:32.870888   46700 ssh_runner.go:195] Run: rm -f paused
	I1205 20:58:32.922554   46700 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1205 20:58:32.924288   46700 out.go:177] 
	W1205 20:58:32.925788   46700 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1205 20:58:32.927148   46700 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1205 20:58:32.928730   46700 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-061206" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-12-05 20:51:14 UTC, ends at Tue 2023-12-05 21:11:04 UTC. --
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.637261836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dba98078-6d4e-402a-920f-afbc93945873 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.661102343Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dda1272b-0f29-4d53-bc32-f989d1185acb name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.661318674Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f40d5a209a3c62bdfb930e5af33656b757ad71b380226f4627ef832b960c4bf,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-jbxkl,Uid:ea6e50b4-4224-441e-878d-bff37f046528,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809848674161472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-jbxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6e50b4-4224-441e-878d-bff37f046528,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:28.319988569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e493874-629d-4446-b372-47fa158aea
4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809847819747975,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-05T20:57:27.471462234Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qm52j,Uid:19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809846203737481,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.475186936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j68qr,Uid:857e6815-cb4c-477d-af2
4-941a37f65f6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809845770031150,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.42285965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-061206,Uid:42f4027feb4c207207ef36a204ac558e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817198887929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207
207ef36a204ac558e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 42f4027feb4c207207ef36a204ac558e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645985869Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-061206,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817159423636,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-05T20:56:56.645982018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a55020c50209daa1d78e8a3b3c
68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-061206,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817154900515,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645968944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-061206,Uid:9a4cd076e0e3bb6062b3f80cd3aea422,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817150165984,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a4cd076e0e3bb6062b3f80cd3aea422,kubernetes.io/config.seen: 2023-12-05T20:56:56.645984155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=dda1272b-0f29-4d53-bc32-f989d1185acb name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.663259197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=384ed21f-7c8c-44b4-b0bd-3bfbdf3ac887 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.663312632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=384ed21f-7c8c-44b4-b0bd-3bfbdf3ac887 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.663462438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=384ed21f-7c8c-44b4-b0bd-3bfbdf3ac887 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.665808315Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=58f7902c-797f-4d70-a20f-b1767e390602 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.666007784Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f40d5a209a3c62bdfb930e5af33656b757ad71b380226f4627ef832b960c4bf,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-jbxkl,Uid:ea6e50b4-4224-441e-878d-bff37f046528,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809848674161472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-jbxkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea6e50b4-4224-441e-878d-bff37f046528,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:28.319988569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e493874-629d-4446-b372-47fa158aea
4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809847819747975,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-12-05T20:57:27.471462234Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-qm52j,Uid:19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809846203737481,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.475186936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&PodSandboxMetadata{Name:kube-proxy-j68qr,Uid:857e6815-cb4c-477d-af2
4-941a37f65f6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809845770031150,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-12-05T20:57:25.42285965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-061206,Uid:42f4027feb4c207207ef36a204ac558e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817198887929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207
207ef36a204ac558e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 42f4027feb4c207207ef36a204ac558e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645985869Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-061206,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817159423636,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-12-05T20:56:56.645982018Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a55020c50209daa1d78e8a3b3c
68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-061206,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817154900515,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-12-05T20:56:56.645968944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-061206,Uid:9a4cd076e0e3bb6062b3f80cd3aea422,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701809817150165984,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9a4cd076e0e3bb6062b3f80cd3aea422,kubernetes.io/config.seen: 2023-12-05T20:56:56.645984155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=58f7902c-797f-4d70-a20f-b1767e390602 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.666784384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c024f119-c80f-411f-bea9-87bc715704fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.666838142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c024f119-c80f-411f-bea9-87bc715704fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.667015653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c024f119-c80f-411f-bea9-87bc715704fa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.683116213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c9d01e97-3f68-4169-9d3e-6ade3ed3208e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.683174369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c9d01e97-3f68-4169-9d3e-6ade3ed3208e name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.685016732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d0cff960-0749-4952-bd6b-fc8b87a6af96 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.685522693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810664685507788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=d0cff960-0749-4952-bd6b-fc8b87a6af96 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.686105753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3a576d0-7ca9-4e9e-96ee-471ad3726346 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.686149103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3a576d0-7ca9-4e9e-96ee-471ad3726346 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.686345472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f3a576d0-7ca9-4e9e-96ee-471ad3726346 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.726932789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d59b37a5-233f-4de8-853a-c490ca1012e6 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.726995274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d59b37a5-233f-4de8-853a-c490ca1012e6 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.728611420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=099aaa85-a79e-4fbe-aa28-8d659f2edd58 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.729038872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701810664729024446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=099aaa85-a79e-4fbe-aa28-8d659f2edd58 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.729500833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=282d24e8-624a-4ab7-9dbc-859c51ca7d8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.729608259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=282d24e8-624a-4ab7-9dbc-859c51ca7d8e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:11:04 old-k8s-version-061206 crio[708]: time="2023-12-05 21:11:04.729832298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61,PodSandboxId:5761d98d74764da9a9d697fae784b60ace2a3093167fabc5c672e016a3ab6f4a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701809848556367222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e493874-629d-4446-b372-47fa158aea4a,},Annotations:map[string]string{io.kubernetes.container.hash: 74af45c1,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c,PodSandboxId:c47670f9603a702ca281b95735fa7b804148a6c20a81c2d23ee1854464ed493a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701809847929886683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j68qr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 857e6815-cb4c-477d-af24-941a37f65f6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d5da940,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1,PodSandboxId:47f738c52328de3b04b9f447a188f8f2a5d89abb8109abcf38ff8fc2bcdf3919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701809846965081140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-qm52j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19e1bbcc-b8e1-4052-b5ae-bc92a75e12e2,},Annotations:map[string]string{io.kubernetes.container.hash: b50ba58f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f,PodSandboxId:877976c08d2da24e6f98be354aa55047bc8b4de7d05ab3eafc98504cf1055ddd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701809819371692419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a4cd076e0e3bb6062b3f80cd3aea422,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 194c8a32,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117,PodSandboxId:151640bbfafca5988dbe7e39c1e4d335d34381c008f30ab62814c7cc8f87d3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701809817979065742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a,PodSandboxId:a55020c50209daa1d78e8a3b3c68d062c0e2e1403a5bb15c727126359636c3ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701809817942153063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac,PodSandboxId:1f1f86ca5bcbb8876cc56b2ffc0a103cd8736fe02bae71856f9e42f88982d241,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701809817729521046,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-061206,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f4027feb4c207207ef36a204ac558e,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c3a92486,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=282d24e8-624a-4ab7-9dbc-859c51ca7d8e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	68ef4ccaf4b56       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   5761d98d74764       storage-provisioner
	a508a24c599b8       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   c47670f9603a7       kube-proxy-j68qr
	c55c7658d0763       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   47f738c52328d       coredns-5644d7b6d9-qm52j
	aeedb4418156e       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   14 minutes ago      Running             etcd                      0                   877976c08d2da       etcd-old-k8s-version-061206
	d6f271a2baa00       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   14 minutes ago      Running             kube-scheduler            0                   151640bbfafca       kube-scheduler-old-k8s-version-061206
	aa2ee8e9a505e       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   14 minutes ago      Running             kube-controller-manager   0                   a55020c50209d       kube-controller-manager-old-k8s-version-061206
	25d5772a51c5b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   14 minutes ago      Running             kube-apiserver            0                   1f1f86ca5bcbb       kube-apiserver-old-k8s-version-061206
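	The table above is the CRI-level view of the node's containers; it corresponds to the ListContainers/ListPodSandbox responses in the CRI-O debug log further up. A rough way to reproduce it by hand, assuming crictl is available inside the minikube VM (it normally is) and using CRI-O's default socket path (/var/run/crio/crio.sock, also visible in the node annotations below), would be:

	    minikube -p old-k8s-version-061206 ssh                                 # enter the node VM
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a    # all containers, any state
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods     # pod sandboxes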
	
	* 
	* ==> coredns [c55c7658d07639cfc52c0b172d4c2d00665d440ac0806e472584efe981b887a1] <==
	* .:53
	2023-12-05T20:57:27.313Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-12-05T20:57:27.313Z [INFO] CoreDNS-1.6.2
	2023-12-05T20:57:27.313Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-12-05T20:57:57.150Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-12-05T20:57:57.159Z [INFO] 127.0.0.1:60377 - 47643 "HINFO IN 8905990356429537435.7948724483024421708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008839679s
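	The two "Running configuration MD5" lines show CoreDNS reloading its Corefile shortly after startup, and the HINFO lookup is the loop-detection probe CoreDNS sends on start. To see the Corefile it loaded (context name assumed to match the minikube profile), something like:

	    kubectl --context old-k8s-version-061206 -n kube-system get configmap coredns -o yaml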
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-061206
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-061206
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7b46af276bae825d70472f5e115d38eac802d728
	                    minikube.k8s.io/name=old-k8s-version-061206
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_05T20_57_09_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Dec 2023 20:57:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Dec 2023 21:11:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Dec 2023 21:11:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Dec 2023 21:11:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Dec 2023 21:11:04 +0000   Tue, 05 Dec 2023 20:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.116
	  Hostname:    old-k8s-version-061206
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 490ff4de3cc346cbadefc512ca4ba833
	 System UUID:                490ff4de-3cc3-46cb-adef-c512ca4ba833
	 Boot ID:                    6369e2b2-de47-44a7-be57-652fcb308eee
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-qm52j                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-061206                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-061206             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-061206    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-j68qr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-061206             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-jbxkl                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet, old-k8s-version-061206     Node old-k8s-version-061206 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-061206  Starting kube-proxy.
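	This block mirrors kubectl's node description for the single control-plane node. To re-check the reported conditions, capacity, and per-pod requests against the live cluster (context name assumed to match the minikube profile), one could run:

	    kubectl --context old-k8s-version-061206 describe node old-k8s-version-061206
	    kubectl --context old-k8s-version-061206 get node old-k8s-version-061206 -o wide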
	
	* 
	* ==> dmesg <==
	* [Dec 5 20:51] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.377184] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.456566] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153272] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.490349] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.412969] systemd-fstab-generator[633]: Ignoring "noauto" for root device
	[  +0.122033] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.155842] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.129914] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.243242] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[ +20.162291] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +0.500222] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +15.991973] kauditd_printk_skb: 3 callbacks suppressed
	[Dec 5 20:52] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.246172] hrtimer: interrupt took 4422961 ns
	[Dec 5 20:56] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +1.269085] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 5 20:57] kauditd_printk_skb: 11 callbacks suppressed
	
	* 
	* ==> etcd [aeedb4418156ebb70d7c5ff4040152197b4a3ddf15f70e275b866c6504986a0f] <==
	* 2023-12-05 20:56:59.527122 I | raft: 70e810c2542c58a7 became follower at term 1
	2023-12-05 20:56:59.537329 W | auth: simple token is not cryptographically signed
	2023-12-05 20:56:59.543224 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-12-05 20:56:59.545066 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-05 20:56:59.545234 I | embed: listening for metrics on http://192.168.50.116:2381
	2023-12-05 20:56:59.545498 I | etcdserver: 70e810c2542c58a7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-05 20:56:59.546168 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-05 20:56:59.546402 I | etcdserver/membership: added member 70e810c2542c58a7 [https://192.168.50.116:2380] to cluster 938c7bbb9c530c74
	2023-12-05 20:56:59.927823 I | raft: 70e810c2542c58a7 is starting a new election at term 1
	2023-12-05 20:56:59.927981 I | raft: 70e810c2542c58a7 became candidate at term 2
	2023-12-05 20:56:59.928176 I | raft: 70e810c2542c58a7 received MsgVoteResp from 70e810c2542c58a7 at term 2
	2023-12-05 20:56:59.928309 I | raft: 70e810c2542c58a7 became leader at term 2
	2023-12-05 20:56:59.928458 I | raft: raft.node: 70e810c2542c58a7 elected leader 70e810c2542c58a7 at term 2
	2023-12-05 20:56:59.928840 I | etcdserver: setting up the initial cluster version to 3.3
	2023-12-05 20:56:59.930841 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-12-05 20:56:59.930926 I | etcdserver/api: enabled capabilities for version 3.3
	2023-12-05 20:56:59.930954 I | etcdserver: published {Name:old-k8s-version-061206 ClientURLs:[https://192.168.50.116:2379]} to cluster 938c7bbb9c530c74
	2023-12-05 20:56:59.930999 I | embed: ready to serve client requests
	2023-12-05 20:56:59.931207 I | embed: ready to serve client requests
	2023-12-05 20:56:59.932986 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-05 20:56:59.935124 I | embed: serving client requests on 192.168.50.116:2379
	2023-12-05 20:57:25.583611 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-j68qr\" " with result "range_response_count:1 size:1746" took too long (140.247659ms) to execute
	2023-12-05 20:57:25.769052 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:3 size:4840" took too long (100.291551ms) to execute
	2023-12-05 21:07:00.979045 I | mvcc: store.index: compact 670
	2023-12-05 21:07:00.982051 I | mvcc: finished scheduled compaction at 670 (took 2.217632ms)
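	Apart from two slow read-only range requests during startup, the single-node etcd looks healthy and is compacting normally. Assuming etcdctl is shipped in the etcd image (it usually is), endpoint status can be queried with the certificate paths already printed in the log above:

	    kubectl --context old-k8s-version-061206 -n kube-system exec etcd-old-k8s-version-061206 -- sh -c \
	      'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	         --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	         --cert=/var/lib/minikube/certs/etcd/server.crt \
	         --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table'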
	
	* 
	* ==> kernel <==
	*  21:11:05 up 19 min,  0 users,  load average: 0.04, 0.14, 0.22
	Linux old-k8s-version-061206 5.10.57 #1 SMP Fri Dec 1 04:24:04 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [25d5772a51c5bfab8223c1bb01c52820be390708b75e44e1d1e90402e27283ac] <==
	* I1205 21:03:05.250038       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:03:05.250143       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:03:05.250182       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:03:05.250194       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:05:05.250701       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:05:05.251184       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:05:05.251340       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:05:05.251391       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:07:05.252411       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:07:05.252839       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:07:05.252922       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:07:05.252945       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:08:05.253421       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:08:05.253588       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:08:05.253651       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:08:05.253665       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:10:05.254305       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1205 21:10:05.254822       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:10:05.254925       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1205 21:10:05.254981       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
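	Every retry here is the apiserver failing to fetch the OpenAPI spec from the aggregated v1beta1.metrics.k8s.io API, because the metrics-server backing it never becomes available. The failing APIService and its pod can be inspected directly (pod name taken from the kubelet log below):

	    kubectl --context old-k8s-version-061206 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context old-k8s-version-061206 -n kube-system describe pod metrics-server-74d5856cc6-jbxkl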
	
	* 
	* ==> kube-controller-manager [aa2ee8e9a505e5f28ccbfe44439b24f7a154d3960ef434efb744131bdcf2b34a] <==
	* W1205 21:04:53.344733       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:04:59.007857       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:05:25.347133       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:05:29.259901       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:05:57.350247       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:05:59.513316       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:06:29.352713       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:06:29.765959       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1205 21:07:00.019191       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:07:01.355111       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:07:30.271239       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:07:33.357959       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:08:00.523475       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:08:05.360718       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:08:30.775821       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:08:37.363158       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:09:01.028097       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:09:09.365382       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:09:31.279987       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:09:41.367614       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:10:01.532056       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:10:13.370074       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:10:31.784409       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1205 21:10:45.372274       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1205 21:11:02.036673       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [a508a24c599b84f94dc4d61498ed6ad314fa43b29f10818c582204b954c4369c] <==
	* W1205 20:57:28.214900       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1205 20:57:28.228785       1 node.go:135] Successfully retrieved node IP: 192.168.50.116
	I1205 20:57:28.228876       1 server_others.go:149] Using iptables Proxier.
	I1205 20:57:28.231870       1 server.go:529] Version: v1.16.0
	I1205 20:57:28.234399       1 config.go:313] Starting service config controller
	I1205 20:57:28.234460       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1205 20:57:28.234504       1 config.go:131] Starting endpoints config controller
	I1205 20:57:28.234611       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1205 20:57:28.338021       1 shared_informer.go:204] Caches are synced for service config 
	I1205 20:57:28.339066       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d6f271a2baa00b0b7ec940eb0cf812fb212dfda7f0f754e36b368599e85f9117] <==
	* W1205 20:57:04.259050       1 authentication.go:79] Authentication is disabled
	I1205 20:57:04.259073       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1205 20:57:04.259418       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1205 20:57:04.298130       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:04.302985       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:57:04.303262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:04.303365       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:57:04.304204       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:57:04.304329       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:04.309111       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:04.309328       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:04.309406       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:04.309466       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:57:04.312787       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 20:57:05.300066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:57:05.306058       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:57:05.311327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:57:05.314448       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:57:05.316047       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 20:57:05.318331       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:05.320953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:57:05.323778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:57:05.323932       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:57:05.326249       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:57:05.326367       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-12-05 20:51:14 UTC, ends at Tue 2023-12-05 21:11:05 UTC. --
	Dec 05 21:06:48 old-k8s-version-061206 kubelet[3199]: E1205 21:06:48.658943    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:06:56 old-k8s-version-061206 kubelet[3199]: E1205 21:06:56.758369    3199 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Dec 05 21:07:02 old-k8s-version-061206 kubelet[3199]: E1205 21:07:02.658356    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:16 old-k8s-version-061206 kubelet[3199]: E1205 21:07:16.658753    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:27 old-k8s-version-061206 kubelet[3199]: E1205 21:07:27.658429    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:38 old-k8s-version-061206 kubelet[3199]: E1205 21:07:38.657961    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:07:51 old-k8s-version-061206 kubelet[3199]: E1205 21:07:51.658746    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:08:04 old-k8s-version-061206 kubelet[3199]: E1205 21:08:04.658131    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:08:17 old-k8s-version-061206 kubelet[3199]: E1205 21:08:17.675651    3199 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:08:17 old-k8s-version-061206 kubelet[3199]: E1205 21:08:17.675733    3199 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:08:17 old-k8s-version-061206 kubelet[3199]: E1205 21:08:17.675789    3199 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Dec 05 21:08:17 old-k8s-version-061206 kubelet[3199]: E1205 21:08:17.675828    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Dec 05 21:08:31 old-k8s-version-061206 kubelet[3199]: E1205 21:08:31.658401    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:08:46 old-k8s-version-061206 kubelet[3199]: E1205 21:08:46.660848    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:08:59 old-k8s-version-061206 kubelet[3199]: E1205 21:08:59.658412    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:09:11 old-k8s-version-061206 kubelet[3199]: E1205 21:09:11.657466    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:09:25 old-k8s-version-061206 kubelet[3199]: E1205 21:09:25.658115    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:09:37 old-k8s-version-061206 kubelet[3199]: E1205 21:09:37.657947    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:09:48 old-k8s-version-061206 kubelet[3199]: E1205 21:09:48.658152    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:09:59 old-k8s-version-061206 kubelet[3199]: E1205 21:09:59.658022    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:10:14 old-k8s-version-061206 kubelet[3199]: E1205 21:10:14.658412    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:10:25 old-k8s-version-061206 kubelet[3199]: E1205 21:10:25.658310    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:10:37 old-k8s-version-061206 kubelet[3199]: E1205 21:10:37.657750    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:10:48 old-k8s-version-061206 kubelet[3199]: E1205 21:10:48.657675    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Dec 05 21:11:03 old-k8s-version-061206 kubelet[3199]: E1205 21:11:03.658651    3199 pod_workers.go:191] Error syncing pod ea6e50b4-4224-441e-878d-bff37f046528 ("metrics-server-74d5856cc6-jbxkl_kube-system(ea6e50b4-4224-441e-878d-bff37f046528)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [68ef4ccaf4b56cbcea18e3c77f94ac178abf6ab76e57eb38bd39d27607c4ba61] <==
	* I1205 20:57:28.741057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:57:28.755001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:57:28.756103       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:57:28.765065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:57:28.766429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948!
	I1205 20:57:28.766704       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1d7830a-c358-4e1a-91a1-982b7108f3e1", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948 became leader
	I1205 20:57:28.870168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-061206_03953d21-9e9c-494b-846f-6389df00f948!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061206 -n old-k8s-version-061206
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-061206 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-jbxkl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl: exit status 1 (83.448695ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-jbxkl" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-061206 describe pod metrics-server-74d5856cc6-jbxkl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (210.08s)

x
+
TestStartStop/group/newest-cni/serial/Stop (140.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-051721 --alsologtostderr -v=3
E1205 21:12:37.060271   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 21:12:46.651513   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-051721 --alsologtostderr -v=3: exit status 82 (2m1.750876209s)

-- stdout --
	* Stopping node "newest-cni-051721"  ...
	* Stopping node "newest-cni-051721"  ...
	
	

-- /stdout --
** stderr ** 
	I1205 21:12:10.062591   53115 out.go:296] Setting OutFile to fd 1 ...
	I1205 21:12:10.062789   53115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:12:10.062801   53115 out.go:309] Setting ErrFile to fd 2...
	I1205 21:12:10.062808   53115 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 21:12:10.063120   53115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 21:12:10.063506   53115 out.go:303] Setting JSON to false
	I1205 21:12:10.063631   53115 mustload.go:65] Loading cluster: newest-cni-051721
	I1205 21:12:10.064155   53115 config.go:182] Loaded profile config "newest-cni-051721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 21:12:10.064274   53115 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/newest-cni-051721/config.json ...
	I1205 21:12:10.064504   53115 mustload.go:65] Loading cluster: newest-cni-051721
	I1205 21:12:10.064674   53115 config.go:182] Loaded profile config "newest-cni-051721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.1
	I1205 21:12:10.064711   53115 stop.go:39] StopHost: newest-cni-051721
	I1205 21:12:10.065303   53115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:12:10.065363   53115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:12:10.084291   53115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I1205 21:12:10.084806   53115 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:12:10.085555   53115 main.go:141] libmachine: Using API Version  1
	I1205 21:12:10.085577   53115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:12:10.085932   53115 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:12:10.088633   53115 out.go:177] * Stopping node "newest-cni-051721"  ...
	I1205 21:12:10.090209   53115 main.go:141] libmachine: Stopping "newest-cni-051721"...
	I1205 21:12:10.090231   53115 main.go:141] libmachine: (newest-cni-051721) Calling .GetState
	I1205 21:12:10.092296   53115 main.go:141] libmachine: (newest-cni-051721) Calling .Stop
	I1205 21:12:10.096468   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 0/60
	I1205 21:12:11.098875   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 1/60
	I1205 21:12:12.101050   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 2/60
	I1205 21:12:13.103508   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 3/60
	I1205 21:12:14.105162   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 4/60
	I1205 21:12:15.107520   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 5/60
	I1205 21:12:16.109081   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 6/60
	I1205 21:12:17.110495   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 7/60
	I1205 21:12:18.113212   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 8/60
	I1205 21:12:19.114732   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 9/60
	I1205 21:12:20.116223   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 10/60
	I1205 21:12:21.117680   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 11/60
	I1205 21:12:22.119663   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 12/60
	I1205 21:12:23.121211   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 13/60
	I1205 21:12:24.122908   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 14/60
	I1205 21:12:25.125048   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 15/60
	I1205 21:12:26.126493   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 16/60
	I1205 21:12:27.129070   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 17/60
	I1205 21:12:28.130266   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 18/60
	I1205 21:12:29.131969   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 19/60
	I1205 21:12:30.134614   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 20/60
	I1205 21:12:31.137173   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 21/60
	I1205 21:12:32.138910   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 22/60
	I1205 21:12:33.141023   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 23/60
	I1205 21:12:34.143835   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 24/60
	I1205 21:12:35.145730   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 25/60
	I1205 21:12:36.147239   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 26/60
	I1205 21:12:37.149536   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 27/60
	I1205 21:12:38.151229   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 28/60
	I1205 21:12:39.153458   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 29/60
	I1205 21:12:40.155686   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 30/60
	I1205 21:12:41.157689   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 31/60
	I1205 21:12:42.159918   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 32/60
	I1205 21:12:43.161179   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 33/60
	I1205 21:12:44.163020   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 34/60
	I1205 21:12:45.164607   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 35/60
	I1205 21:12:46.166203   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 36/60
	I1205 21:12:47.167785   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 37/60
	I1205 21:12:48.169767   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 38/60
	I1205 21:12:49.171067   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 39/60
	I1205 21:12:50.173286   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 40/60
	I1205 21:12:51.174811   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 41/60
	I1205 21:12:52.176325   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 42/60
	I1205 21:12:53.178053   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 43/60
	I1205 21:12:54.179541   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 44/60
	I1205 21:12:55.181513   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 45/60
	I1205 21:12:56.183166   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 46/60
	I1205 21:12:57.184580   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 47/60
	I1205 21:12:58.185771   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 48/60
	I1205 21:12:59.187125   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 49/60
	I1205 21:13:00.188769   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 50/60
	I1205 21:13:01.190609   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 51/60
	I1205 21:13:02.193012   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 52/60
	I1205 21:13:03.195296   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 53/60
	I1205 21:13:04.197041   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 54/60
	I1205 21:13:05.198993   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 55/60
	I1205 21:13:06.200691   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 56/60
	I1205 21:13:07.202401   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 57/60
	I1205 21:13:08.204525   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 58/60
	I1205 21:13:09.206064   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 59/60
	I1205 21:13:10.207341   53115 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 21:13:10.207399   53115 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:13:10.207418   53115 retry.go:31] will retry after 1.35035734s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:13:11.558573   53115 stop.go:39] StopHost: newest-cni-051721
	I1205 21:13:11.559012   53115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:13:11.559067   53115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:13:11.573377   53115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I1205 21:13:11.573830   53115 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:13:11.574359   53115 main.go:141] libmachine: Using API Version  1
	I1205 21:13:11.574393   53115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:13:11.574846   53115 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:13:11.576923   53115 out.go:177] * Stopping node "newest-cni-051721"  ...
	I1205 21:13:11.578481   53115 main.go:141] libmachine: Stopping "newest-cni-051721"...
	I1205 21:13:11.578498   53115 main.go:141] libmachine: (newest-cni-051721) Calling .GetState
	I1205 21:13:11.580602   53115 main.go:141] libmachine: (newest-cni-051721) Calling .Stop
	I1205 21:13:11.584176   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 0/60
	I1205 21:13:12.585863   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 1/60
	I1205 21:13:13.587411   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 2/60
	I1205 21:13:14.588861   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 3/60
	I1205 21:13:15.590183   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 4/60
	I1205 21:13:16.592272   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 5/60
	I1205 21:13:17.593616   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 6/60
	I1205 21:13:18.595054   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 7/60
	I1205 21:13:19.596967   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 8/60
	I1205 21:13:20.599410   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 9/60
	I1205 21:13:21.601397   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 10/60
	I1205 21:13:22.603794   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 11/60
	I1205 21:13:23.605360   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 12/60
	I1205 21:13:24.607097   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 13/60
	I1205 21:13:25.608767   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 14/60
	I1205 21:13:26.610528   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 15/60
	I1205 21:13:27.613046   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 16/60
	I1205 21:13:28.615265   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 17/60
	I1205 21:13:29.617597   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 18/60
	I1205 21:13:30.619766   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 19/60
	I1205 21:13:31.621953   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 20/60
	I1205 21:13:32.623398   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 21/60
	I1205 21:13:33.625053   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 22/60
	I1205 21:13:34.626617   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 23/60
	I1205 21:13:35.628993   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 24/60
	I1205 21:13:36.631117   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 25/60
	I1205 21:13:37.632966   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 26/60
	I1205 21:13:38.634936   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 27/60
	I1205 21:13:39.637185   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 28/60
	I1205 21:13:40.638859   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 29/60
	I1205 21:13:41.640815   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 30/60
	I1205 21:13:42.642383   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 31/60
	I1205 21:13:43.643782   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 32/60
	I1205 21:13:44.676779   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 33/60
	I1205 21:13:45.678566   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 34/60
	I1205 21:13:46.680016   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 35/60
	I1205 21:13:47.681665   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 36/60
	I1205 21:13:48.683591   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 37/60
	I1205 21:13:49.685941   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 38/60
	I1205 21:13:50.687556   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 39/60
	I1205 21:13:51.689609   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 40/60
	I1205 21:13:52.691557   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 41/60
	I1205 21:13:53.693137   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 42/60
	I1205 21:13:54.695779   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 43/60
	I1205 21:13:55.697390   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 44/60
	I1205 21:13:56.700651   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 45/60
	I1205 21:13:57.702232   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 46/60
	I1205 21:13:58.704699   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 47/60
	I1205 21:13:59.706327   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 48/60
	I1205 21:14:00.707685   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 49/60
	I1205 21:14:01.709569   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 50/60
	I1205 21:14:02.710946   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 51/60
	I1205 21:14:03.712430   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 52/60
	I1205 21:14:04.714314   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 53/60
	I1205 21:14:05.715754   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 54/60
	I1205 21:14:06.717488   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 55/60
	I1205 21:14:07.719096   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 56/60
	I1205 21:14:08.720405   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 57/60
	I1205 21:14:09.721757   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 58/60
	I1205 21:14:10.723063   53115 main.go:141] libmachine: (newest-cni-051721) Waiting for machine to stop 59/60
	I1205 21:14:11.723773   53115 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1205 21:14:11.723822   53115 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:14:11.726283   53115 out.go:177] 
	W1205 21:14:11.727941   53115 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 21:14:11.727959   53115 out.go:239] * 
	* 
	W1205 21:14:11.730961   53115 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:14:11.732361   53115 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-051721 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721
E1205 21:14:13.487789   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:18.608778   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:22.906420   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:22.911700   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:22.922011   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:22.942369   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:22.982715   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:23.063904   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:23.224474   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:23.545286   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:24.185638   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:25.466429   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:28.027247   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:28.849606   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721: exit status 3 (18.621308572s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1205 21:14:30.354606   56461 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host
	E1205 21:14:30.354630   56461 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-051721" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.37s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721
E1205 21:14:33.147828   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721: exit status 3 (3.199721486s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1205 21:14:33.554699   56659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host
	E1205 21:14:33.554721   56659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-051721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-051721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1523074s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-051721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721: exit status 3 (3.066443515s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1205 21:14:42.774563   56729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host
	E1205 21:14:42.774583   56729 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.252:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-051721" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)


Test pass (232/301)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.95
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 5.64
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.1/json-events 9.02
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.57
27 TestOffline 69.04
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 142.95
34 TestAddons/parallel/Registry 14.91
37 TestAddons/parallel/MetricsServer 5.96
38 TestAddons/parallel/HelmTiller 11.94
40 TestAddons/parallel/CSI 65.96
41 TestAddons/parallel/Headlamp 14.4
42 TestAddons/parallel/CloudSpanner 5.68
43 TestAddons/parallel/LocalPath 10.01
44 TestAddons/parallel/NvidiaDevicePlugin 5.64
47 TestAddons/serial/GCPAuth/Namespaces 0.14
49 TestCertOptions 65.9
50 TestCertExpiration 300.19
52 TestForceSystemdFlag 50.61
53 TestForceSystemdEnv 68.49
55 TestKVMDriverInstallOrUpdate 1.37
59 TestErrorSpam/setup 45.33
60 TestErrorSpam/start 0.38
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.61
63 TestErrorSpam/unpause 1.76
64 TestErrorSpam/stop 2.26
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 98.8
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 38.37
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
76 TestFunctional/serial/CacheCmd/cache/add_local 1.93
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 37.91
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.57
87 TestFunctional/serial/LogsFileCmd 1.55
88 TestFunctional/serial/InvalidService 4.69
90 TestFunctional/parallel/ConfigCmd 0.43
91 TestFunctional/parallel/DashboardCmd 13.23
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 12.71
99 TestFunctional/parallel/AddonsCmd 0.2
100 TestFunctional/parallel/PersistentVolumeClaim 50.51
102 TestFunctional/parallel/SSHCmd 0.51
103 TestFunctional/parallel/CpCmd 1.21
104 TestFunctional/parallel/MySQL 32.5
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.6
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
114 TestFunctional/parallel/License 0.22
115 TestFunctional/parallel/MountCmd/any-port 9.84
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
117 TestFunctional/parallel/MountCmd/specific-port 2.11
118 TestFunctional/parallel/ServiceCmd/List 0.4
119 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
123 TestFunctional/parallel/ServiceCmd/Format 0.44
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ServiceCmd/URL 0.62
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
127 TestFunctional/parallel/Version/short 0.06
128 TestFunctional/parallel/Version/components 1.02
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
133 TestFunctional/parallel/ImageCommands/ImageBuild 2.88
134 TestFunctional/parallel/ImageCommands/Setup 1.09
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.82
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.65
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.19
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.91
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.34
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.51
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestIngressAddonLegacy/StartLegacyK8sCluster 84.38
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.48
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
167 TestJSONOutput/start/Command 113.39
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.7
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.72
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.11
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 97.92
199 TestMountStart/serial/StartWithMountFirst 27.2
200 TestMountStart/serial/VerifyMountFirst 0.4
201 TestMountStart/serial/StartWithMountSecond 30.7
202 TestMountStart/serial/VerifyMountSecond 0.4
203 TestMountStart/serial/DeleteFirst 0.68
204 TestMountStart/serial/VerifyMountPostDelete 0.4
205 TestMountStart/serial/Stop 1.41
206 TestMountStart/serial/RestartStopped 23.03
207 TestMountStart/serial/VerifyMountPostStop 0.41
210 TestMultiNode/serial/FreshStart2Nodes 108.94
211 TestMultiNode/serial/DeployApp2Nodes 4.54
213 TestMultiNode/serial/AddNode 42.79
214 TestMultiNode/serial/MultiNodeLabels 0.06
215 TestMultiNode/serial/ProfileList 0.23
216 TestMultiNode/serial/CopyFile 7.8
217 TestMultiNode/serial/StopNode 3
218 TestMultiNode/serial/StartAfterStop 29.09
220 TestMultiNode/serial/DeleteNode 1.81
222 TestMultiNode/serial/RestartMultiNode 446.73
223 TestMultiNode/serial/ValidateNameConflict 48.12
230 TestScheduledStopUnix 118.47
236 TestKubernetesUpgrade 223.52
239 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
240 TestNoKubernetes/serial/StartWithK8s 110.21
241 TestStoppedBinaryUpgrade/Setup 0.45
243 TestNoKubernetes/serial/StartWithStopK8s 33.26
244 TestNoKubernetes/serial/Start 28.19
246 TestPause/serial/Start 89.27
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
248 TestNoKubernetes/serial/ProfileList 0.8
249 TestNoKubernetes/serial/Stop 1.36
250 TestNoKubernetes/serial/StartNoArgs 48.61
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.64
266 TestNetworkPlugins/group/false 3.58
272 TestStartStop/group/old-k8s-version/serial/FirstStart 184.93
274 TestStartStop/group/no-preload/serial/FirstStart 195.6
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.39
277 TestStartStop/group/embed-certs/serial/FirstStart 139.7
278 TestStartStop/group/embed-certs/serial/DeployApp 8.43
279 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
281 TestStartStop/group/old-k8s-version/serial/DeployApp 7.44
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
284 TestStartStop/group/no-preload/serial/DeployApp 9.97
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
288 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.78
290 TestStartStop/group/embed-certs/serial/SecondStart 684.81
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.5
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
295 TestStartStop/group/old-k8s-version/serial/SecondStart 704.73
297 TestStartStop/group/no-preload/serial/SecondStart 611.52
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 460.48
309 TestStartStop/group/newest-cni/serial/FirstStart 60.2
310 TestNetworkPlugins/group/auto/Start 125.8
311 TestNetworkPlugins/group/kindnet/Start 108.99
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.16
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
316 TestNetworkPlugins/group/auto/KubeletFlags 0.23
317 TestNetworkPlugins/group/auto/NetCatPod 11.41
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
319 TestNetworkPlugins/group/kindnet/NetCatPod 11.73
320 TestNetworkPlugins/group/auto/DNS 0.19
321 TestNetworkPlugins/group/auto/Localhost 0.15
322 TestNetworkPlugins/group/auto/HairPin 0.15
323 TestNetworkPlugins/group/kindnet/DNS 0.19
324 TestNetworkPlugins/group/kindnet/Localhost 0.15
325 TestNetworkPlugins/group/kindnet/HairPin 0.15
326 TestNetworkPlugins/group/calico/Start 95.06
327 TestNetworkPlugins/group/custom-flannel/Start 108.28
329 TestStartStop/group/newest-cni/serial/SecondStart 406.73
330 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
331 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.36
332 TestNetworkPlugins/group/enable-default-cni/Start 352.24
333 TestNetworkPlugins/group/calico/ControllerPod 5.03
334 TestNetworkPlugins/group/calico/KubeletFlags 0.23
335 TestNetworkPlugins/group/calico/NetCatPod 12.38
336 TestNetworkPlugins/group/calico/DNS 0.24
337 TestNetworkPlugins/group/calico/Localhost 0.14
338 TestNetworkPlugins/group/calico/HairPin 0.16
339 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
340 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
341 TestNetworkPlugins/group/custom-flannel/DNS 0.18
342 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
343 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
344 TestNetworkPlugins/group/flannel/Start 315.76
345 TestNetworkPlugins/group/bridge/Start 345.5
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.33
348 TestNetworkPlugins/group/flannel/ControllerPod 5.02
349 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
350 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
351 TestNetworkPlugins/group/flannel/NetCatPod 12.33
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
354 TestNetworkPlugins/group/flannel/DNS 0.25
355 TestNetworkPlugins/group/flannel/Localhost 0.23
356 TestNetworkPlugins/group/flannel/HairPin 0.19
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
360 TestStartStop/group/newest-cni/serial/Pause 3.48
361 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
362 TestNetworkPlugins/group/bridge/NetCatPod 11.34
363 TestNetworkPlugins/group/bridge/DNS 0.17
364 TestNetworkPlugins/group/bridge/Localhost 0.13
365 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.16.0/json-events (10.95s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.95014002s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.95s)

x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103789
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103789: exit status 85 (74.22823ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-103789        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:34:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:34:47.287903   13422 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:34:47.288026   13422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:47.288030   13422 out.go:309] Setting ErrFile to fd 2...
	I1205 19:34:47.288035   13422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:47.288221   13422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	W1205 19:34:47.288332   13422 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: no such file or directory
	I1205 19:34:47.288881   13422 out.go:303] Setting JSON to true
	I1205 19:34:47.289712   13422 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1040,"bootTime":1701803847,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:34:47.289770   13422 start.go:138] virtualization: kvm guest
	I1205 19:34:47.292391   13422 out.go:97] [download-only-103789] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:34:47.293892   13422 out.go:169] MINIKUBE_LOCATION=17731
	W1205 19:34:47.292527   13422 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 19:34:47.292593   13422 notify.go:220] Checking for updates...
	I1205 19:34:47.296855   13422 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:34:47.298341   13422 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:34:47.299669   13422 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:34:47.301302   13422 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:34:47.303947   13422 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:34:47.304196   13422 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:34:47.406198   13422 out.go:97] Using the kvm2 driver based on user configuration
	I1205 19:34:47.406230   13422 start.go:298] selected driver: kvm2
	I1205 19:34:47.406237   13422 start.go:902] validating driver "kvm2" against <nil>
	I1205 19:34:47.406593   13422 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:34:47.406733   13422 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:34:47.420680   13422 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:34:47.420737   13422 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1205 19:34:47.421223   13422 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 19:34:47.421382   13422 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 19:34:47.421451   13422 cni.go:84] Creating CNI manager for ""
	I1205 19:34:47.421467   13422 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:34:47.421479   13422 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 19:34:47.421488   13422 start_flags.go:323] config:
	{Name:download-only-103789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-103789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:47.421732   13422 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:34:47.423778   13422 out.go:97] Downloading VM boot image ...
	I1205 19:34:47.423813   13422 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/iso/amd64/minikube-v1.32.1-1701387192-17703-amd64.iso
	I1205 19:34:53.635333   13422 out.go:97] Starting control plane node download-only-103789 in cluster download-only-103789
	I1205 19:34:53.635364   13422 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:53.657925   13422 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:53.657961   13422 cache.go:56] Caching tarball of preloaded images
	I1205 19:34:53.658130   13422 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1205 19:34:53.660305   13422 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1205 19:34:53.660340   13422 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:53.695527   13422 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:56.772715   13422 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:56.772801   13422 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-103789"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (5.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.641423975s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.64s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103789
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103789: exit status 85 (72.293395ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-103789        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-103789        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:34:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:34:58.314380   13480 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:34:58.314590   13480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:58.314602   13480 out.go:309] Setting ErrFile to fd 2...
	I1205 19:34:58.314609   13480 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:34:58.314855   13480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	W1205 19:34:58.315012   13480 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: no such file or directory
	I1205 19:34:58.315474   13480 out.go:303] Setting JSON to true
	I1205 19:34:58.316266   13480 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1051,"bootTime":1701803847,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:34:58.316322   13480 start.go:138] virtualization: kvm guest
	I1205 19:34:58.318846   13480 out.go:97] [download-only-103789] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:34:58.320509   13480 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:34:58.319038   13480 notify.go:220] Checking for updates...
	I1205 19:34:58.323661   13480 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:34:58.325188   13480 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:34:58.326904   13480 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:34:58.328337   13480 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:34:58.331002   13480 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:34:58.331500   13480 config.go:182] Loaded profile config "download-only-103789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1205 19:34:58.331558   13480 start.go:810] api.Load failed for download-only-103789: filestore "download-only-103789": Docker machine "download-only-103789" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:34:58.331689   13480 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:34:58.331745   13480 start.go:810] api.Load failed for download-only-103789: filestore "download-only-103789": Docker machine "download-only-103789" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:34:58.363188   13480 out.go:97] Using the kvm2 driver based on existing profile
	I1205 19:34:58.363239   13480 start.go:298] selected driver: kvm2
	I1205 19:34:58.363247   13480 start.go:902] validating driver "kvm2" against &{Name:download-only-103789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-103789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:58.363647   13480 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:34:58.363727   13480 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:34:58.378238   13480 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:34:58.379013   13480 cni.go:84] Creating CNI manager for ""
	I1205 19:34:58.379033   13480 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:34:58.379046   13480 start_flags.go:323] config:
	{Name:download-only-103789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-103789 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:34:58.379208   13480 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:34:58.381206   13480 out.go:97] Starting control plane node download-only-103789 in cluster download-only-103789
	I1205 19:34:58.381218   13480 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:34:58.403138   13480 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:34:58.403170   13480 cache.go:56] Caching tarball of preloaded images
	I1205 19:34:58.403320   13480 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1205 19:34:58.405418   13480 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1205 19:34:58.405432   13480 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:34:58.429176   13480 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:02.399094   13480 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:02.399207   13480 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-103789"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/json-events (9.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103789 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.016244155s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (9.02s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103789
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103789: exit status 85 (71.43491ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-103789           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:34 UTC |          |
	|         | -p download-only-103789           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-103789 | jenkins | v1.32.0 | 05 Dec 23 19:35 UTC |          |
	|         | -p download-only-103789           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/05 19:35:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 19:35:04.027802   13525 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:35:04.028095   13525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:04.028109   13525 out.go:309] Setting ErrFile to fd 2...
	I1205 19:35:04.028115   13525 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:35:04.028386   13525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	W1205 19:35:04.028507   13525 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: open /home/jenkins/minikube-integration/17731-6237/.minikube/config/config.json: no such file or directory
	I1205 19:35:04.028940   13525 out.go:303] Setting JSON to true
	I1205 19:35:04.029823   13525 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1057,"bootTime":1701803847,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:35:04.029909   13525 start.go:138] virtualization: kvm guest
	I1205 19:35:04.032166   13525 out.go:97] [download-only-103789] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:35:04.033938   13525 out.go:169] MINIKUBE_LOCATION=17731
	I1205 19:35:04.032309   13525 notify.go:220] Checking for updates...
	I1205 19:35:04.037460   13525 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:35:04.039152   13525 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:35:04.040879   13525 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:35:04.042420   13525 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 19:35:04.045433   13525 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 19:35:04.045865   13525 config.go:182] Loaded profile config "download-only-103789": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1205 19:35:04.045920   13525 start.go:810] api.Load failed for download-only-103789: filestore "download-only-103789": Docker machine "download-only-103789" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:04.046002   13525 driver.go:392] Setting default libvirt URI to qemu:///system
	W1205 19:35:04.046027   13525 start.go:810] api.Load failed for download-only-103789: filestore "download-only-103789": Docker machine "download-only-103789" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1205 19:35:04.079027   13525 out.go:97] Using the kvm2 driver based on existing profile
	I1205 19:35:04.079059   13525 start.go:298] selected driver: kvm2
	I1205 19:35:04.079066   13525 start.go:902] validating driver "kvm2" against &{Name:download-only-103789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-103789 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:04.079473   13525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:04.079541   13525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17731-6237/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 19:35:04.093998   13525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1205 19:35:04.094736   13525 cni.go:84] Creating CNI manager for ""
	I1205 19:35:04.094755   13525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 19:35:04.094764   13525 start_flags.go:323] config:
	{Name:download-only-103789 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-103789 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:35:04.094899   13525 iso.go:125] acquiring lock: {Name:mkf32314af88b41722cec1155a4daa6dd452cf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 19:35:04.096692   13525 out.go:97] Starting control plane node download-only-103789 in cluster download-only-103789
	I1205 19:35:04.096714   13525 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:04.118101   13525 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:04.118126   13525 cache.go:56] Caching tarball of preloaded images
	I1205 19:35:04.118309   13525 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:04.120228   13525 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1205 19:35:04.120252   13525 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:04.143490   13525 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:26a42be529125e55182ed93a618b213b -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1205 19:35:07.934393   13525 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:07.934486   13525 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17731-6237/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I1205 19:35:08.748470   13525 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on crio
	I1205 19:35:08.748637   13525 profile.go:148] Saving config to /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/download-only-103789/config.json ...
	I1205 19:35:08.748871   13525 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime crio
	I1205 19:35:08.749059   13525 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17731-6237/.minikube/cache/linux/amd64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-103789"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-103789
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-109311 --alsologtostderr --binary-mirror http://127.0.0.1:35295 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-109311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-109311
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (69.04s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-078065 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-078065 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.805794885s)
helpers_test.go:175: Cleaning up "offline-crio-078065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-078065
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-078065: (1.231413888s)
--- PASS: TestOffline (69.04s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-489440
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-489440: exit status 85 (61.478115ms)

                                                
                                                
-- stdout --
	* Profile "addons-489440" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-489440"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-489440
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-489440: exit status 85 (62.912212ms)

                                                
                                                
-- stdout --
	* Profile "addons-489440" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-489440"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (142.95s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-489440 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-489440 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.946518962s)
--- PASS: TestAddons/Setup (142.95s)

                                                
                                    
TestAddons/parallel/Registry (14.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 27.664729ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2nhwg" [1e708b27-168c-4eae-aebb-7d96da6c9f76] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.048505676s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wnn8h" [2f34e994-0f5a-4ee5-8faa-f0de5de7c04b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017829767s
addons_test.go:339: (dbg) Run:  kubectl --context addons-489440 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-489440 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-489440 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.892858968s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 ip
2023/12/05 19:37:51 [DEBUG] GET http://192.168.39.118:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.91s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 27.569159ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-msjks" [5361bdf5-6fee-48ec-8911-5271ae9055e5] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.04842993s
addons_test.go:414: (dbg) Run:  kubectl --context addons-489440 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.497637ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-l5vtg" [7e6cc3fe-6001-4c06-a49e-003585210abd] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012623762s
addons_test.go:472: (dbg) Run:  kubectl --context addons-489440 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-489440 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.249497369s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.94s)

                                                
                                    
TestAddons/parallel/CSI (65.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 30.343911ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8bd0bbf2-0f81-4fdb-9e71-38023d49fa54] Pending
helpers_test.go:344: "task-pv-pod" [8bd0bbf2-0f81-4fdb-9e71-38023d49fa54] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8bd0bbf2-0f81-4fdb-9e71-38023d49fa54] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.0311123s
addons_test.go:583: (dbg) Run:  kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-489440 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-489440 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-489440 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-489440 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-489440 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b3af5f1f-c535-4bb5-bbb9-e7d34e8d2167] Pending
helpers_test.go:344: "task-pv-pod-restore" [b3af5f1f-c535-4bb5-bbb9-e7d34e8d2167] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b3af5f1f-c535-4bb5-bbb9-e7d34e8d2167] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.035113228s
addons_test.go:625: (dbg) Run:  kubectl --context addons-489440 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-489440 delete pod task-pv-pod-restore: (1.054703567s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-489440 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-489440 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-489440 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.862351574s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.96s)
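
The CSI block above exercises the full hostpath-driver flow: bind a PVC, attach a pod, snapshot the volume, then restore the snapshot into a fresh claim and pod. As an annotation rather than captured output, a minimal replay of the snapshot-and-restore step might look like the sketch below; the context, object and manifest names are taken from the log, while the polling loop and the kubectl wait call stand in for the harness's own wait logic.

    # Poll the snapshot until the CSI snapshotter marks it ready (illustrative, not the test's loop).
    until [ "$(kubectl --context addons-489440 get volumesnapshot new-snapshot-demo \
        -o jsonpath='{.status.readyToUse}')" = "true" ]; do sleep 2; done

    # Restore it into a new PVC/pod pair using the same manifests the test applies.
    kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-489440 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    kubectl --context addons-489440 wait --for=condition=Ready pod/task-pv-pod-restore --timeout=6m0s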

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-489440 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-489440 --alsologtostderr -v=1: (2.353713976s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-p25zv" [9ec94079-0e4c-4256-8e7c-08a5876826ed] Pending
helpers_test.go:344: "headlamp-777fd4b855-p25zv" [9ec94079-0e4c-4256-8e7c-08a5876826ed] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-p25zv" [9ec94079-0e4c-4256-8e7c-08a5876826ed] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.050330434s
--- PASS: TestAddons/parallel/Headlamp (14.40s)
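
Headlamp goes through the normal addon mechanism and counts as healthy once its pod reports Ready. The equivalent manual check, reusing the profile from this run and mirroring the test's 8-minute wait, is roughly:

    # Enable the addon, then block until the Headlamp pod is Ready.
    out/minikube-linux-amd64 addons enable headlamp -p addons-489440
    kubectl --context addons-489440 -n headlamp wait --for=condition=Ready \
        pod -l app.kubernetes.io/name=headlamp --timeout=8m0s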

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-h46nt" [821abebe-227d-4b63-a057-9a08c535d119] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012232375s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-489440
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-489440 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-489440 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ad70519c-666c-4ab9-99c6-054db2e39246] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ad70519c-666c-4ab9-99c6-054db2e39246] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ad70519c-666c-4ab9-99c6-054db2e39246] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.011290407s
addons_test.go:890: (dbg) Run:  kubectl --context addons-489440 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 ssh "cat /opt/local-path-provisioner/pvc-fb2b2dea-9f18-4d7a-86cd-fd40e7f776f4_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-489440 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-489440 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-489440 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.01s)
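
LocalPath binds a PVC through the rancher local-path provisioner, lets a busybox pod write file1, and then reads it back from the node under /opt/local-path-provisioner. A sketch of that read-back step follows; looking the volume name up via jsonpath is an illustrative addition, but the resulting path matches the one shown in the log.

    # Resolve the bound PV name, then read the file the pod wrote directly from the node.
    pv=$(kubectl --context addons-489440 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-489440 ssh "cat /opt/local-path-provisioner/${pv}_default_test-pvc/file1"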

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jw4c2" [2e516e12-3f41-47c1-a610-801efcb32379] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.016385807s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-489440
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-489440 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-489440 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
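
The point of this short check is that the gcp-auth addon is expected to copy its pull secret into namespaces created after it is enabled. Replayed by hand against the same profile:

    # A namespace created now should already contain the replicated gcp-auth secret.
    kubectl --context addons-489440 create ns new-namespace
    kubectl --context addons-489440 get secret gcp-auth -n new-namespace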

                                                
                                    
x
+
TestCertOptions (65.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-525564 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1205 20:40:16.960248   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-525564 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m4.267686379s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-525564 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-525564 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-525564 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-525564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-525564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-525564: (1.090627919s)
--- PASS: TestCertOptions (65.90s)
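
TestCertOptions starts a cluster with extra --apiserver-ips/--apiserver-names entries and a non-default API server port, then inspects the generated certificate over SSH. To eyeball the SANs the same way by hand (the grep filter is an addition, not part of the test):

    # Dump the apiserver certificate from inside the VM and show its Subject Alternative Names.
    out/minikube-linux-amd64 -p cert-options-525564 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"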

                                                
                                    
x
+
TestCertExpiration (300.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m40.8537s)
E1205 20:42:20.107650   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:42:37.060378   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:42:46.652220   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E1205 20:45:16.960149   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (18.318316732s)
helpers_test.go:175: Cleaning up "cert-expiration-873953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-873953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-873953: (1.016991646s)
--- PASS: TestCertExpiration (300.19s)
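
CertExpiration provisions the cluster with certificates that live only three minutes, waits out that window (hence the roughly 300 s total), and then restarts with --cert-expiration=8760h to confirm minikube regenerates the expired certs. The two start commands are the ones from the log; the trailing openssl check of the new expiry date is an illustrative addition.

    out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...let the 3m window lapse, then renew with a one-year lifetime...
    out/minikube-linux-amd64 start -p cert-expiration-873953 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p cert-expiration-873953 ssh \
        "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"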

                                                
                                    
x
+
TestForceSystemdFlag (50.61s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-699600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-699600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.590006683s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-699600 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-699600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-699600
--- PASS: TestForceSystemdFlag (50.61s)

                                                
                                    
x
+
TestForceSystemdEnv (68.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-903631 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-903631 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.489266144s)
helpers_test.go:175: Cleaning up "force-systemd-env-903631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-903631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-903631: (1.005395779s)
--- PASS: TestForceSystemdEnv (68.49s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.37s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.37s)

                                                
                                    
x
+
TestErrorSpam/setup (45.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-247776 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-247776 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-247776 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-247776 --driver=kvm2  --container-runtime=crio: (45.328019685s)
--- PASS: TestErrorSpam/setup (45.33s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
x
+
TestErrorSpam/stop (2.26s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 stop: (2.097098959s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-247776 --log_dir /tmp/nospam-247776 stop
--- PASS: TestErrorSpam/stop (2.26s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17731-6237/.minikube/files/etc/test/nested/copy/13410/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (98.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-341707 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.795578855s)
--- PASS: TestFunctional/serial/StartWithProxy (98.80s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.37s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-341707 --alsologtostderr -v=8: (38.3696991s)
functional_test.go:659: soft start took 38.370354155s for "functional-341707" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.37s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-341707 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 cache add registry.k8s.io/pause:3.3: (1.067370935s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 cache add registry.k8s.io/pause:latest: (1.053310794s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-341707 /tmp/TestFunctionalserialCacheCmdcacheadd_local3613599292/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache add minikube-local-cache-test:functional-341707
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 cache add minikube-local-cache-test:functional-341707: (1.597693273s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache delete minikube-local-cache-test:functional-341707
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-341707
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.501329ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
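
cache_reload removes a cached image from the node's container store, confirms crictl no longer finds it (the exit-1 block above), and then pushes it back with cache reload. Condensed into a manual replay with the same image and profile:

    # Remove the image on the node, verify it is gone, then restore it from minikube's local cache.
    out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: not present
    out/minikube-linux-amd64 -p functional-341707 cache reload
    out/minikube-linux-amd64 -p functional-341707 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again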

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 kubectl -- --context functional-341707 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-341707 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1205 19:52:37.060427   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.066335   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.076587   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.096851   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.137219   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.217685   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.378158   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:37.698812   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:52:38.339868   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-341707 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.912982127s)
functional_test.go:757: restart took 37.913078052s for "functional-341707" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-341707 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
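
ComponentHealth pulls the control-plane pods (label tier=control-plane in kube-system) and asserts each is Running and Ready, which is what the phase/status pairs above reflect. A compact way to see the same information at a glance, as a sketch rather than the test's own JSON parsing:

    # One line per control-plane pod with its phase.
    kubectl --context functional-341707 get po -l tier=control-plane -n kube-system \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'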

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 logs
E1205 19:52:39.620258   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 logs: (1.570579634s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 logs --file /tmp/TestFunctionalserialLogsFileCmd1095213797/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 logs --file /tmp/TestFunctionalserialLogsFileCmd1095213797/001/logs.txt: (1.546451064s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.69s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-341707 apply -f testdata/invalidsvc.yaml
E1205 19:52:42.180888   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-341707
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-341707: exit status 115 (297.291688ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.158:32308 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-341707 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-341707 delete -f testdata/invalidsvc.yaml: (1.062162875s)
--- PASS: TestFunctional/serial/InvalidService (4.69s)
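
InvalidService applies a Service whose selector matches no running pod and expects the service command to refuse the URL with SVC_UNREACHABLE (exit status 115) instead of handing out the NodePort shown in the table. One hand-run way to confirm the cause, assuming the same context (this check is illustrative and not part of the test):

    # With no backing pods the endpoints list stays empty, matching the
    # "no running pod for service invalid-svc found" error above.
    kubectl --context functional-341707 get endpoints invalid-svc
    out/minikube-linux-amd64 service invalid-svc -p functional-341707; echo "exit=$?"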

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 config get cpus: exit status 14 (69.890483ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 config get cpus: exit status 14 (68.420157ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
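
ConfigCmd cycles the cpus key through unset, get, set, get, unset, get; the interesting detail is the exit code: config get returns 14 whenever the key is absent, which scripts can branch on. A small sketch reusing the profile from the run:

    # "config get" exits 14 when the key has never been set (or was just unset).
    out/minikube-linux-amd64 -p functional-341707 config unset cpus
    out/minikube-linux-amd64 -p functional-341707 config get cpus
    echo "get on an unset key exited with $?"        # 14 in the run above
    out/minikube-linux-amd64 -p functional-341707 config set cpus 2
    out/minikube-linux-amd64 -p functional-341707 config get cpus    # now prints 2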

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-341707 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-341707 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21696: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.23s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-341707 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.043407ms)

                                                
                                                
-- stdout --
	* [functional-341707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:53:01.105819   21225 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:53:01.106068   21225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:01.106077   21225 out.go:309] Setting ErrFile to fd 2...
	I1205 19:53:01.106082   21225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:01.106305   21225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 19:53:01.106864   21225 out.go:303] Setting JSON to false
	I1205 19:53:01.107903   21225 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1701803847,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:53:01.107963   21225 start.go:138] virtualization: kvm guest
	I1205 19:53:01.109839   21225 out.go:177] * [functional-341707] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 19:53:01.111900   21225 notify.go:220] Checking for updates...
	I1205 19:53:01.111909   21225 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:53:01.113900   21225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:53:01.115322   21225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:53:01.116674   21225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:53:01.118128   21225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:53:01.119521   21225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:53:01.121259   21225 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:53:01.121683   21225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:53:01.121746   21225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:53:01.142579   21225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I1205 19:53:01.143020   21225 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:53:01.143586   21225 main.go:141] libmachine: Using API Version  1
	I1205 19:53:01.143611   21225 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:53:01.143955   21225 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:53:01.144130   21225 main.go:141] libmachine: (functional-341707) Calling .DriverName
	I1205 19:53:01.144380   21225 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:53:01.144666   21225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:53:01.144699   21225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:53:01.159929   21225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1205 19:53:01.160312   21225 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:53:01.160964   21225 main.go:141] libmachine: Using API Version  1
	I1205 19:53:01.161012   21225 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:53:01.161419   21225 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:53:01.161604   21225 main.go:141] libmachine: (functional-341707) Calling .DriverName
	I1205 19:53:01.197137   21225 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 19:53:01.199083   21225 start.go:298] selected driver: kvm2
	I1205 19:53:01.199105   21225 start.go:902] validating driver "kvm2" against &{Name:functional-341707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-341707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.158 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:53:01.199243   21225 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:53:01.202009   21225 out.go:177] 
	W1205 19:53:01.203401   21225 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 19:53:01.205235   21225 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
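
DryRun shows that minikube start --dry-run validates the request against the existing profile without touching the VM: 250MB is below the usable minimum of 1800MB, so the command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, unconstrained dry run succeeds. A minimal way to poke at that validation path by hand:

    # Dry-run with too little memory: nothing is changed, the exit code carries the rejection.
    out/minikube-linux-amd64 start -p functional-341707 --dry-run --memory 250MB \
        --driver=kvm2 --container-runtime=crio
    echo "dry-run exit: $?"    # 23 == RSRC_INSUFFICIENT_REQ_MEMORY in this run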

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-341707 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-341707 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (187.052513ms)

                                                
                                                
-- stdout --
	* [functional-341707] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 19:53:00.856063   21148 out.go:296] Setting OutFile to fd 1 ...
	I1205 19:53:00.856190   21148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:00.856218   21148 out.go:309] Setting ErrFile to fd 2...
	I1205 19:53:00.856229   21148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 19:53:00.856513   21148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 19:53:00.857024   21148 out.go:303] Setting JSON to false
	I1205 19:53:00.857933   21148 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2134,"bootTime":1701803847,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 19:53:00.857991   21148 start.go:138] virtualization: kvm guest
	I1205 19:53:00.860054   21148 out.go:177] * [functional-341707] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 19:53:00.861650   21148 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 19:53:00.861668   21148 notify.go:220] Checking for updates...
	I1205 19:53:00.864630   21148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 19:53:00.866137   21148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 19:53:00.867566   21148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 19:53:00.869024   21148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 19:53:00.870394   21148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 19:53:00.872109   21148 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 19:53:00.872591   21148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:53:00.872637   21148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:53:00.893475   21148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40395
	I1205 19:53:00.893892   21148 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:53:00.894599   21148 main.go:141] libmachine: Using API Version  1
	I1205 19:53:00.894628   21148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:53:00.894997   21148 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:53:00.895195   21148 main.go:141] libmachine: (functional-341707) Calling .DriverName
	I1205 19:53:00.895504   21148 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 19:53:00.895949   21148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 19:53:00.896026   21148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 19:53:00.911541   21148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42699
	I1205 19:53:00.911935   21148 main.go:141] libmachine: () Calling .GetVersion
	I1205 19:53:00.912443   21148 main.go:141] libmachine: Using API Version  1
	I1205 19:53:00.912469   21148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 19:53:00.912862   21148 main.go:141] libmachine: () Calling .GetMachineName
	I1205 19:53:00.913080   21148 main.go:141] libmachine: (functional-341707) Calling .DriverName
	I1205 19:53:00.952121   21148 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1205 19:53:00.954166   21148 start.go:298] selected driver: kvm2
	I1205 19:53:00.954183   21148 start.go:902] validating driver "kvm2" against &{Name:functional-341707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17703/minikube-v1.32.1-1701387192-17703-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701387262-17703@sha256:a5458414df1be5e58eff93b3e67e6ecaad7e51ab23139de15714f7345af15e2f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-341707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.158 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1205 19:53:00.954347   21148 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 19:53:00.957129   21148 out.go:177] 
	W1205 19:53:00.958920   21148 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 19:53:00.960852   21148 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
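
Note: the template keys exercised here (.Host, .Kubelet, .APIServer, .Kubeconfig) suggest the same check can be scripted against `minikube status -o json`. A minimal Go sketch, with the profile name taken from this run, the binary assumed to be on PATH, and the JSON field names treated as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// status mirrors the template keys used by the test above; treating them as
// the JSON field names is an assumption, not taken from minikube's source.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-341707", "status", "-o", "json").Output()
	// minikube exits non-zero when a component is stopped but may still print
	// the JSON, so only give up if there is nothing to decode.
	if err != nil && len(out) == 0 {
		log.Fatalf("minikube status: %v", err)
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}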

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-341707 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-341707 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8v8k7" [3d9c8ac4-3cc7-4828-918a-6067aa03168d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8v8k7" [3d9c8ac4-3cc7-4828-918a-6067aa03168d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.013192214s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.158:30872
functional_test.go:1674: http://192.168.50.158:30872: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-8v8k7

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.158:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.158:30872
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.71s)
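
Note: the three steps above (create the deployment, expose it as a NodePort, resolve the URL via `minikube service`) can be replayed outside the test harness. A rough Go sketch, assuming kubectl and minikube are on PATH and the same context/profile name exists locally:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and aborts with its combined output on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	run("kubectl", "--context", "functional-341707", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", "functional-341707", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	// In practice the pod must be Ready before the endpoint answers; the
	// real test waits up to 10 minutes for that before fetching the URL.
	url := run("minikube", "-p", "functional-341707", "service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s\n", url, resp.StatusCode, body)
}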

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [73d3cfce-7770-48bc-a974-64ed498b9702] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.026624901s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-341707 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-341707 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-341707 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-341707 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341707 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1938546e-4d33-4dfa-b1b8-a701278fec40] Pending
helpers_test.go:344: "sp-pod" [1938546e-4d33-4dfa-b1b8-a701278fec40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1938546e-4d33-4dfa-b1b8-a701278fec40] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.02469658s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-341707 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-341707 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-341707 delete -f testdata/storage-provisioner/pod.yaml: (1.568034704s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341707 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [43e021d2-bcdd-4329-9abf-d1130812b7db] Pending
helpers_test.go:344: "sp-pod" [43e021d2-bcdd-4329-9abf-d1130812b7db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [43e021d2-bcdd-4329-9abf-d1130812b7db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.028328488s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-341707 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.51s)
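
Note: the core of this test is the persistence check: a marker file written through the first sp-pod must still be on the claim after the pod is deleted and recreated. A minimal sketch of that check, reusing the testdata manifests and pod name referenced above (their presence is assumed):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a command against the test's context and aborts on failure.
func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-341707"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the new sp-pod to become Ready before exec'ing into it)
	out := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if !strings.Contains(out, "foo") {
		log.Fatal("marker file did not survive pod recreation")
	}
	log.Println("PVC contents survived pod recreation")
}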

                                                
                                    
TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh -n functional-341707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 cp functional-341707:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1128626224/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh -n functional-341707 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)

                                                
                                    
TestFunctional/parallel/MySQL (32.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-341707 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-d7vvl" [ce0df53b-4d4a-40d5-8750-28bd5ed50d38] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-d7vvl" [ce0df53b-4d4a-40d5-8750-28bd5ed50d38] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.02676194s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341707 exec mysql-859648c796-d7vvl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341707 exec mysql-859648c796-d7vvl -- mysql -ppassword -e "show databases;": exit status 1 (266.080699ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341707 exec mysql-859648c796-d7vvl -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341707 exec mysql-859648c796-d7vvl -- mysql -ppassword -e "show databases;": exit status 1 (166.622501ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341707 exec mysql-859648c796-d7vvl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.50s)
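
Note: the two non-zero exits above (ERROR 1045, then ERROR 2002) are ordinary start-up races for the mysql image, which is why the test simply reruns the query. A small retry sketch; the `deploy/mysql` target is an assumption based on the pod name shown above:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-341707",
			"exec", "deploy/mysql", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			log.Printf("query succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		// Access-denied and socket errors are expected while mysqld is
		// still initializing; back off and try again.
		log.Printf("attempt %d failed (%v), retrying: %s", attempt, err, out)
		time.Sleep(5 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}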

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13410/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /etc/test/nested/copy/13410/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
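
Note: the synced file comes from the host-side files directory: anything placed under $MINIKUBE_HOME/.minikube/files/<path> is copied into the guest at /<path> when the cluster starts (treat the exact location as an assumption for your minikube version). A sketch with an illustrative path instead of the test's PID-based one:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home, _ := os.UserHomeDir()
	// Hypothetical path; the test above uses .../copy/<pid>/hosts instead.
	src := filepath.Join(home, ".minikube", "files", "etc", "test", "nested", "copy", "demo", "hosts")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(src, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// The file is synced on the next `minikube start`; afterwards it should
	// be visible in the guest at /etc/test/nested/copy/demo/hosts.
	out, err := exec.Command("minikube", "-p", "functional-341707", "ssh",
		"sudo cat /etc/test/nested/copy/demo/hosts").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh: %v\n%s", err, out)
	}
	log.Printf("guest copy:\n%s", out)
}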

                                                
                                    
TestFunctional/parallel/CertSync (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13410.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /etc/ssl/certs/13410.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13410.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /usr/share/ca-certificates/13410.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/134102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /etc/ssl/certs/134102.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/134102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /usr/share/ca-certificates/134102.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.60s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-341707 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "sudo systemctl is-active docker": exit status 1 (340.496215ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "sudo systemctl is-active containerd": exit status 1 (314.414767ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
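
Note: `systemctl is-active` prints the unit state and exits non-zero for inactive units (status 3 here), which `minikube ssh` propagates; the test reads the non-zero exit as "runtime disabled". A small sketch of the same probe, with the profile name as a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive reports whether a systemd unit inside the guest is active,
// treating any non-zero exit from `minikube ssh` as "not active".
func runtimeActive(profile, unit string) bool {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
	return err == nil && state == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, runtimeActive("functional-341707", unit))
	}
}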

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdany-port2773870946/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701805966650048287" to /tmp/TestFunctionalparallelMountCmdany-port2773870946/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701805966650048287" to /tmp/TestFunctionalparallelMountCmdany-port2773870946/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701805966650048287" to /tmp/TestFunctionalparallelMountCmdany-port2773870946/001/test-1701805966650048287
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.683281ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 19:52 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 19:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 19:52 test-1701805966650048287
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh cat /mount-9p/test-1701805966650048287
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-341707 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9a7f2c0e-8616-4333-9b8d-934919a7a6c3] Pending
helpers_test.go:344: "busybox-mount" [9a7f2c0e-8616-4333-9b8d-934919a7a6c3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9a7f2c0e-8616-4333-9b8d-934919a7a6c3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9a7f2c0e-8616-4333-9b8d-934919a7a6c3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.03952683s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-341707 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdany-port2773870946/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.84s)
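
Note: the first `findmnt` probe failing and the second succeeding is expected; the mount daemon needs a moment to come up. A rough sketch of the mount/verify/teardown cycle, with placeholder host path and profile name:

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	if err := os.MkdirAll("/tmp/demo-mount", 0o755); err != nil {
		log.Fatal(err)
	}
	mount := exec.Command("minikube", "-p", "functional-341707", "mount",
		"/tmp/demo-mount:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("start mount: %v", err)
	}
	defer mount.Process.Kill()

	for i := 0; i < 20; i++ {
		out, err := exec.Command("minikube", "-p", "functional-341707", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			log.Printf("9p mount is up:\n%s", out)
			return
		}
		// Retry while the mount daemon is still starting, as the test does.
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("9p mount never appeared in the guest")
}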

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-341707 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-341707 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-mdr5n" [0a973078-c7bd-41d6-a738-b711458f7ec3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1205 19:52:47.301668   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-mdr5n" [0a973078-c7bd-41d6-a738-b711458f7ec3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.021864615s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdspecific-port2036600159/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.608959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T /mount-9p | grep 9p"
E1205 19:52:57.542059   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdspecific-port2036600159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "sudo umount -f /mount-9p": exit status 1 (298.953432ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-341707 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdspecific-port2036600159/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T" /mount1: exit status 1 (349.242649ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-341707 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-341707 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2942506978/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service list -o json
functional_test.go:1493: Took "552.075744ms" to run "out/minikube-linux-amd64 -p functional-341707 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.158:32265
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "329.528951ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "74.709743ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.158:32265
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "383.408891ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "88.468221ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
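
Note: `minikube profile list -o json` (with or without `--light`) groups profiles under top-level keys, and a consumer can stay schema-light by decoding generically. The sketch below assumes each entry carries a "Name" field; treat that, and the exact grouping, as assumptions rather than a documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	raw, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	// Decode into a generic map of group name -> list of profile objects.
	var groups map[string][]map[string]interface{}
	if err := json.Unmarshal(raw, &groups); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for group, profiles := range groups {
		fmt.Printf("%s profiles: %d\n", group, len(profiles))
		for _, p := range profiles {
			fmt.Printf("  - %v\n", p["Name"])
		}
	}
}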

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 version -o=json --components: (1.018353234s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341707 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-341707
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-341707
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341707 image ls --format short --alsologtostderr:
I1205 19:53:32.453541   22351 out.go:296] Setting OutFile to fd 1 ...
I1205 19:53:32.453686   22351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.453698   22351 out.go:309] Setting ErrFile to fd 2...
I1205 19:53:32.453706   22351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.453945   22351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
I1205 19:53:32.454646   22351 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.454764   22351 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.455275   22351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.455324   22351 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.469727   22351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36073
I1205 19:53:32.470163   22351 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.470681   22351 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.470706   22351 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.471086   22351 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.471248   22351 main.go:141] libmachine: (functional-341707) Calling .GetState
I1205 19:53:32.472974   22351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.473007   22351 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.486977   22351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
I1205 19:53:32.487304   22351 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.487673   22351 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.487686   22351 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.487946   22351 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.488096   22351 main.go:141] libmachine: (functional-341707) Calling .DriverName
I1205 19:53:32.488257   22351 ssh_runner.go:195] Run: systemctl --version
I1205 19:53:32.488275   22351 main.go:141] libmachine: (functional-341707) Calling .GetSSHHostname
I1205 19:53:32.491033   22351 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.491315   22351 main.go:141] libmachine: (functional-341707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:f8", ip: ""} in network mk-functional-341707: {Iface:virbr1 ExpiryTime:2023-12-05 20:49:52 +0000 UTC Type:0 Mac:52:54:00:e2:0b:f8 Iaid: IPaddr:192.168.50.158 Prefix:24 Hostname:functional-341707 Clientid:01:52:54:00:e2:0b:f8}
I1205 19:53:32.491344   22351 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined IP address 192.168.50.158 and MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.491526   22351 main.go:141] libmachine: (functional-341707) Calling .GetSSHPort
I1205 19:53:32.491679   22351 main.go:141] libmachine: (functional-341707) Calling .GetSSHKeyPath
I1205 19:53:32.491844   22351 main.go:141] libmachine: (functional-341707) Calling .GetSSHUsername
I1205 19:53:32.491969   22351 sshutil.go:53] new ssh client: &{IP:192.168.50.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/functional-341707/id_rsa Username:docker}
I1205 19:53:32.586021   22351 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:53:32.676828   22351 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.676844   22351 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.677163   22351 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.677180   22351 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:32.677199   22351 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.677209   22351 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.677426   22351 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.677449   22351 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:32.677475   22351 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341707 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-341707  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-341707  | 2235dcd56c1ff | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341707 image ls --format table --alsologtostderr:
I1205 19:53:33.023841   22463 out.go:296] Setting OutFile to fd 1 ...
I1205 19:53:33.023997   22463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:33.024011   22463 out.go:309] Setting ErrFile to fd 2...
I1205 19:53:33.024018   22463 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:33.024269   22463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
I1205 19:53:33.024904   22463 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:33.025031   22463 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:33.025554   22463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:33.025613   22463 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:33.040446   22463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
I1205 19:53:33.040831   22463 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:33.041304   22463 main.go:141] libmachine: Using API Version  1
I1205 19:53:33.041333   22463 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:33.041648   22463 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:33.041858   22463 main.go:141] libmachine: (functional-341707) Calling .GetState
I1205 19:53:33.043494   22463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:33.043537   22463 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:33.056971   22463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
I1205 19:53:33.057432   22463 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:33.057845   22463 main.go:141] libmachine: Using API Version  1
I1205 19:53:33.057877   22463 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:33.058177   22463 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:33.058340   22463 main.go:141] libmachine: (functional-341707) Calling .DriverName
I1205 19:53:33.058522   22463 ssh_runner.go:195] Run: systemctl --version
I1205 19:53:33.058546   22463 main.go:141] libmachine: (functional-341707) Calling .GetSSHHostname
I1205 19:53:33.060865   22463 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:33.061253   22463 main.go:141] libmachine: (functional-341707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:f8", ip: ""} in network mk-functional-341707: {Iface:virbr1 ExpiryTime:2023-12-05 20:49:52 +0000 UTC Type:0 Mac:52:54:00:e2:0b:f8 Iaid: IPaddr:192.168.50.158 Prefix:24 Hostname:functional-341707 Clientid:01:52:54:00:e2:0b:f8}
I1205 19:53:33.061284   22463 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined IP address 192.168.50.158 and MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:33.061419   22463 main.go:141] libmachine: (functional-341707) Calling .GetSSHPort
I1205 19:53:33.061602   22463 main.go:141] libmachine: (functional-341707) Calling .GetSSHKeyPath
I1205 19:53:33.061763   22463 main.go:141] libmachine: (functional-341707) Calling .GetSSHUsername
I1205 19:53:33.061898   22463 sshutil.go:53] new ssh client: &{IP:192.168.50.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/functional-341707/id_rsa Username:docker}
I1205 19:53:33.194223   22463 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:53:33.304973   22463 main.go:141] libmachine: Making call to close driver server
I1205 19:53:33.304992   22463 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:33.305258   22463 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:33.305280   22463 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:33.305298   22463 main.go:141] libmachine: Making call to close driver server
I1205 19:53:33.305298   22463 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:33.305308   22463 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:33.305541   22463 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:33.305561   22463 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:33.305562   22463 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
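
Note: the ImageListJson output below shows the shape of `minikube image ls --format json` for this run: an array of objects whose id, repoTags, repoDigests, and size fields are all strings or string lists. A minimal consumer sketch limited to those visible fields:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors only the fields visible in the ImageListJson output below;
// anything not shown there is intentionally left out.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	raw, err := exec.Command("minikube", "-p", "functional-341707",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(raw, &images); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-13.13s %10s  %v\n", img.ID, img.Size, img.RepoTags)
	}
}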

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341707 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ec
c7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-c
ontroller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe
753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-341707"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"2235dcd56c1ff35274c9a6ceedb10603a7
43d8c62ddaa09658890837e6f37176","repoDigests":["localhost/minikube-local-cache-test@sha256:b0bfbb36e0b253c4be45f1dd3767268bc7ab64375606d99cf764a19f8cd580f2"],"repoTags":["localhost/minikube-local-cache-test:functional-341707"],"size":"3345"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoD
igests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"si
ze":"127226832"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341707 image ls --format json --alsologtostderr:
I1205 19:53:32.740928   22409 out.go:296] Setting OutFile to fd 1 ...
I1205 19:53:32.741059   22409 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.741067   22409 out.go:309] Setting ErrFile to fd 2...
I1205 19:53:32.741072   22409 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.741254   22409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
I1205 19:53:32.741814   22409 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.741910   22409 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.742260   22409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.742325   22409 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.756864   22409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
I1205 19:53:32.757357   22409 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.757885   22409 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.757909   22409 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.758188   22409 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.758397   22409 main.go:141] libmachine: (functional-341707) Calling .GetState
I1205 19:53:32.760610   22409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.760657   22409 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.774593   22409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43101
I1205 19:53:32.774907   22409 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.775301   22409 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.775315   22409 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.775561   22409 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.775686   22409 main.go:141] libmachine: (functional-341707) Calling .DriverName
I1205 19:53:32.775845   22409 ssh_runner.go:195] Run: systemctl --version
I1205 19:53:32.775864   22409 main.go:141] libmachine: (functional-341707) Calling .GetSSHHostname
I1205 19:53:32.778224   22409 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.778650   22409 main.go:141] libmachine: (functional-341707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:f8", ip: ""} in network mk-functional-341707: {Iface:virbr1 ExpiryTime:2023-12-05 20:49:52 +0000 UTC Type:0 Mac:52:54:00:e2:0b:f8 Iaid: IPaddr:192.168.50.158 Prefix:24 Hostname:functional-341707 Clientid:01:52:54:00:e2:0b:f8}
I1205 19:53:32.778686   22409 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined IP address 192.168.50.158 and MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.778821   22409 main.go:141] libmachine: (functional-341707) Calling .GetSSHPort
I1205 19:53:32.778948   22409 main.go:141] libmachine: (functional-341707) Calling .GetSSHKeyPath
I1205 19:53:32.779066   22409 main.go:141] libmachine: (functional-341707) Calling .GetSSHUsername
I1205 19:53:32.779215   22409 sshutil.go:53] new ssh client: &{IP:192.168.50.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/functional-341707/id_rsa Username:docker}
I1205 19:53:32.887230   22409 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:53:32.953526   22409 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.953543   22409 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.953831   22409 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:32.953843   22409 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.953862   22409 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:32.953878   22409 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.953888   22409 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.954103   22409 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.954119   22409 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
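Note: the JSON printed by "image ls --format json" above is a flat array of image records with id, repoDigests, repoTags and size fields. As a minimal sketch of consuming that output (not part of the test suite; the binary path and profile name are taken from this run, everything else is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the "image ls --format json" output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size is reported as a string of bytes
}

func main() {
	// Hypothetical invocation; adjust the binary path and profile for your environment.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-341707",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}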

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341707 image ls --format yaml --alsologtostderr:
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-341707
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2235dcd56c1ff35274c9a6ceedb10603a743d8c62ddaa09658890837e6f37176
repoDigests:
- localhost/minikube-local-cache-test@sha256:b0bfbb36e0b253c4be45f1dd3767268bc7ab64375606d99cf764a19f8cd580f2
repoTags:
- localhost/minikube-local-cache-test:functional-341707
size: "3345"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341707 image ls --format yaml --alsologtostderr:
I1205 19:53:32.449207   22350 out.go:296] Setting OutFile to fd 1 ...
I1205 19:53:32.449345   22350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.449355   22350 out.go:309] Setting ErrFile to fd 2...
I1205 19:53:32.449362   22350 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.449564   22350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
I1205 19:53:32.450180   22350 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.450423   22350 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.450916   22350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.450955   22350 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.464901   22350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
I1205 19:53:32.465349   22350 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.465994   22350 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.466021   22350 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.466393   22350 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.466632   22350 main.go:141] libmachine: (functional-341707) Calling .GetState
I1205 19:53:32.468670   22350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.468713   22350 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.483409   22350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
I1205 19:53:32.483883   22350 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.484397   22350 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.484424   22350 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.484775   22350 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.484956   22350 main.go:141] libmachine: (functional-341707) Calling .DriverName
I1205 19:53:32.485155   22350 ssh_runner.go:195] Run: systemctl --version
I1205 19:53:32.485176   22350 main.go:141] libmachine: (functional-341707) Calling .GetSSHHostname
I1205 19:53:32.488018   22350 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.488472   22350 main.go:141] libmachine: (functional-341707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:f8", ip: ""} in network mk-functional-341707: {Iface:virbr1 ExpiryTime:2023-12-05 20:49:52 +0000 UTC Type:0 Mac:52:54:00:e2:0b:f8 Iaid: IPaddr:192.168.50.158 Prefix:24 Hostname:functional-341707 Clientid:01:52:54:00:e2:0b:f8}
I1205 19:53:32.488505   22350 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined IP address 192.168.50.158 and MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.488655   22350 main.go:141] libmachine: (functional-341707) Calling .GetSSHPort
I1205 19:53:32.488821   22350 main.go:141] libmachine: (functional-341707) Calling .GetSSHKeyPath
I1205 19:53:32.489107   22350 main.go:141] libmachine: (functional-341707) Calling .GetSSHUsername
I1205 19:53:32.489332   22350 sshutil.go:53] new ssh client: &{IP:192.168.50.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/functional-341707/id_rsa Username:docker}
I1205 19:53:32.580218   22350 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 19:53:32.642018   22350 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.642034   22350 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.642296   22350 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.642317   22350 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:32.642329   22350 main.go:141] libmachine: Making call to close driver server
I1205 19:53:32.642331   22350 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:32.642338   22350 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:32.642630   22350 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:32.642647   22350 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:32.642664   22350 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-341707 ssh pgrep buildkitd: exit status 1 (242.672589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image build -t localhost/my-image:functional-341707 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image build -t localhost/my-image:functional-341707 testdata/build --alsologtostderr: (2.40339839s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-341707 image build -t localhost/my-image:functional-341707 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3fdb4cbee1f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-341707
--> daf6a3ae64e
Successfully tagged localhost/my-image:functional-341707
daf6a3ae64e8a3094d32e10c543bdd33cea21af78524fd58ea9aca527af3006a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-341707 image build -t localhost/my-image:functional-341707 testdata/build --alsologtostderr:
I1205 19:53:32.956858   22451 out.go:296] Setting OutFile to fd 1 ...
I1205 19:53:32.957055   22451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.957064   22451 out.go:309] Setting ErrFile to fd 2...
I1205 19:53:32.957070   22451 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1205 19:53:32.957374   22451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
I1205 19:53:32.958205   22451 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.958919   22451 config.go:182] Loaded profile config "functional-341707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1205 19:53:32.959372   22451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.959462   22451 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.975197   22451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
I1205 19:53:32.975672   22451 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.976213   22451 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.976235   22451 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.976681   22451 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.976869   22451 main.go:141] libmachine: (functional-341707) Calling .GetState
I1205 19:53:32.979442   22451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 19:53:32.979485   22451 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 19:53:32.993599   22451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
I1205 19:53:32.993959   22451 main.go:141] libmachine: () Calling .GetVersion
I1205 19:53:32.994435   22451 main.go:141] libmachine: Using API Version  1
I1205 19:53:32.994463   22451 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 19:53:32.994748   22451 main.go:141] libmachine: () Calling .GetMachineName
I1205 19:53:32.994964   22451 main.go:141] libmachine: (functional-341707) Calling .DriverName
I1205 19:53:32.995178   22451 ssh_runner.go:195] Run: systemctl --version
I1205 19:53:32.995204   22451 main.go:141] libmachine: (functional-341707) Calling .GetSSHHostname
I1205 19:53:32.998477   22451 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.998942   22451 main.go:141] libmachine: (functional-341707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:0b:f8", ip: ""} in network mk-functional-341707: {Iface:virbr1 ExpiryTime:2023-12-05 20:49:52 +0000 UTC Type:0 Mac:52:54:00:e2:0b:f8 Iaid: IPaddr:192.168.50.158 Prefix:24 Hostname:functional-341707 Clientid:01:52:54:00:e2:0b:f8}
I1205 19:53:32.999013   22451 main.go:141] libmachine: (functional-341707) DBG | domain functional-341707 has defined IP address 192.168.50.158 and MAC address 52:54:00:e2:0b:f8 in network mk-functional-341707
I1205 19:53:32.999061   22451 main.go:141] libmachine: (functional-341707) Calling .GetSSHPort
I1205 19:53:32.999247   22451 main.go:141] libmachine: (functional-341707) Calling .GetSSHKeyPath
I1205 19:53:32.999395   22451 main.go:141] libmachine: (functional-341707) Calling .GetSSHUsername
I1205 19:53:32.999549   22451 sshutil.go:53] new ssh client: &{IP:192.168.50.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/functional-341707/id_rsa Username:docker}
I1205 19:53:33.101180   22451 build_images.go:151] Building image from path: /tmp/build.1443678989.tar
I1205 19:53:33.101242   22451 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 19:53:33.112540   22451 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1443678989.tar
I1205 19:53:33.118023   22451 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1443678989.tar: stat -c "%s %y" /var/lib/minikube/build/build.1443678989.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1443678989.tar': No such file or directory
I1205 19:53:33.118048   22451 ssh_runner.go:362] scp /tmp/build.1443678989.tar --> /var/lib/minikube/build/build.1443678989.tar (3072 bytes)
I1205 19:53:33.146267   22451 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1443678989
I1205 19:53:33.158193   22451 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1443678989 -xf /var/lib/minikube/build/build.1443678989.tar
I1205 19:53:33.168225   22451 crio.go:297] Building image: /var/lib/minikube/build/build.1443678989
I1205 19:53:33.168316   22451 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-341707 /var/lib/minikube/build/build.1443678989 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 19:53:35.250864   22451 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-341707 /var/lib/minikube/build/build.1443678989 --cgroup-manager=cgroupfs: (2.0825182s)
I1205 19:53:35.250926   22451 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1443678989
I1205 19:53:35.275541   22451 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1443678989.tar
I1205 19:53:35.290675   22451 build_images.go:207] Built localhost/my-image:functional-341707 from /tmp/build.1443678989.tar
I1205 19:53:35.290706   22451 build_images.go:123] succeeded building to: functional-341707
I1205 19:53:35.290713   22451 build_images.go:124] failed building to: 
I1205 19:53:35.290747   22451 main.go:141] libmachine: Making call to close driver server
I1205 19:53:35.290763   22451 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:35.291097   22451 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:35.291121   22451 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 19:53:35.291121   22451 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:35.291132   22451 main.go:141] libmachine: Making call to close driver server
I1205 19:53:35.291143   22451 main.go:141] libmachine: (functional-341707) Calling .Close
I1205 19:53:35.291371   22451 main.go:141] libmachine: (functional-341707) DBG | Closing plugin on server side
I1205 19:53:35.291411   22451 main.go:141] libmachine: Successfully made call to close driver server
I1205 19:53:35.291428   22451 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)
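Note: the STEP lines in the build output above imply a three-step build context under testdata/build. A plausible reconstruction (an illustrative assumption, not the repository's actual files) is a Containerfile/Dockerfile of the form

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

with a small content.txt next to it. The stderr log then shows how the build is executed on the guest: the context is tarred to /tmp, copied to /var/lib/minikube/build, and built with "sudo podman build ... --cgroup-manager=cgroupfs", since the container runtime under test is CRI-O.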

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.072325882s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-341707
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 update-context --alsologtostderr -v=2
E1205 19:53:18.023050   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr: (4.574057697s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image load --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr: (5.284150923s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image save gcr.io/google-containers/addon-resizer:functional-341707 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image save gcr.io/google-containers/addon-resizer:functional-341707 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.193995487s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image rm gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.069084325s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-341707
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-341707 image save --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-341707 image save --daemon gcr.io/google-containers/addon-resizer:functional-341707 --alsologtostderr: (1.479672986s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-341707
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.51s)
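Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a full save/load round trip. A minimal sketch of driving the same sequence from Go (assuming the binary path, profile and image name from this run; the tarball path is hypothetical):

package main

import (
	"log"
	"os/exec"
)

// run shells out to the minikube binary, mirroring the commands in the log above.
func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-341707"
	tar := "/tmp/addon-resizer-save.tar" // hypothetical path
	run("-p", "functional-341707", "image", "save", img, tar)         // save to a tarball
	run("-p", "functional-341707", "image", "rm", img)                // remove from the runtime
	run("-p", "functional-341707", "image", "load", tar)              // load it back from the tarball
	run("-p", "functional-341707", "image", "save", "--daemon", img)  // export to the local docker daemon
}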

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-341707
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-341707
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-341707
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (84.38s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-376951 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1205 19:53:58.983394   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-376951 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.378125725s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.38s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.48s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons enable ingress --alsologtostderr -v=5: (13.477183282s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.48s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-376951 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

                                                
                                    
TestJSONOutput/start/Command (113.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-133939 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1205 19:58:04.744996   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 19:58:07.135980   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:58:27.616504   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 19:59:08.577087   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-133939 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.385714484s)
--- PASS: TestJSONOutput/start/Command (113.39s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-133939 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-133939 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-133939 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-133939 --output=json --user=testUser: (7.106666557s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-400795 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-400795 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.514884ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"011e91d1-1da4-490c-878f-c712a13a6e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-400795] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d77407c7-23ff-48e0-b3d3-825a49c3bfd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17731"}}
	{"specversion":"1.0","id":"12b525ba-0dea-4407-9839-b543285e32ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"86095210-7d57-4b8e-9ec2-6dc87b5efbcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig"}}
	{"specversion":"1.0","id":"1627b18c-7b98-499a-87d8-c506c736f318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube"}}
	{"specversion":"1.0","id":"1f713240-d093-4556-a682-e9c9135eea04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"058e7a77-ea2b-4cd6-ba4f-a9329c43f69c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"15c74ce0-faba-46b4-852a-094a7faafcc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-400795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-400795
--- PASS: TestErrorJSONOutput (0.22s)
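Note: each stdout line in TestErrorJSONOutput above is a CloudEvents-style JSON object. A minimal Go sketch for reading such a stream (field names are taken from the events shown above; the program itself is an illustrative assumption):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the envelope visible in the events above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe "minikube start --output=json" into this
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Println(ev.Type, ev.Data["message"])
	}
}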

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.92s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-070053 --driver=kvm2  --container-runtime=crio
E1205 20:00:16.959881   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:16.965158   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:16.975438   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:16.995779   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:17.036055   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:17.116601   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:17.277227   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:17.597906   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:18.238997   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:19.519342   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:22.081163   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:27.202072   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:00:30.498122   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:00:37.442863   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-070053 --driver=kvm2  --container-runtime=crio: (48.095136504s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-072543 --driver=kvm2  --container-runtime=crio
E1205 20:00:57.923203   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:01:38.884306   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-072543 --driver=kvm2  --container-runtime=crio: (47.201372097s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-070053
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-072543
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-072543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-072543
helpers_test.go:175: Cleaning up "first-070053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-070053
--- PASS: TestMinikubeProfile (97.92s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-189139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-189139 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.196633252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.20s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-189139 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-189139 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (30.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-202745 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1205 20:02:37.060786   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-202745 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.696240588s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-189139 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-202745
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-202745: (1.410500812s)
--- PASS: TestMountStart/serial/Stop (1.41s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.03s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-202745
E1205 20:02:46.651857   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:03:00.806955   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-202745: (22.029319955s)
--- PASS: TestMountStart/serial/RestartStopped (23.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-202745 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (108.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558947 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 20:03:14.338688   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558947 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.503739735s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-558947 -- rollout status deployment/busybox: (2.794197203s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-6www8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558947 -- exec busybox-5bc68d56bd-phsxm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.54s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-558947 -v 3 --alsologtostderr
E1205 20:05:16.959604   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:05:44.647467   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-558947 -v 3 --alsologtostderr: (42.197401974s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.79s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-558947 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp testdata/cp-test.txt multinode-558947:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile280168625/001/cp-test_multinode-558947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947:/home/docker/cp-test.txt multinode-558947-m02:/home/docker/cp-test_multinode-558947_multinode-558947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test_multinode-558947_multinode-558947-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947:/home/docker/cp-test.txt multinode-558947-m03:/home/docker/cp-test_multinode-558947_multinode-558947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test_multinode-558947_multinode-558947-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp testdata/cp-test.txt multinode-558947-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile280168625/001/cp-test_multinode-558947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt multinode-558947:/home/docker/cp-test_multinode-558947-m02_multinode-558947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test_multinode-558947-m02_multinode-558947.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m02:/home/docker/cp-test.txt multinode-558947-m03:/home/docker/cp-test_multinode-558947-m02_multinode-558947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test_multinode-558947-m02_multinode-558947-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp testdata/cp-test.txt multinode-558947-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile280168625/001/cp-test_multinode-558947-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt multinode-558947:/home/docker/cp-test_multinode-558947-m03_multinode-558947.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947 "sudo cat /home/docker/cp-test_multinode-558947-m03_multinode-558947.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 cp multinode-558947-m03:/home/docker/cp-test.txt multinode-558947-m02:/home/docker/cp-test_multinode-558947-m03_multinode-558947-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 ssh -n multinode-558947-m02 "sudo cat /home/docker/cp-test_multinode-558947-m03_multinode-558947-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.80s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-558947 node stop m03: (2.09388206s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558947 status: exit status 7 (453.765974ms)

                                                
                                                
-- stdout --
	multinode-558947
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558947-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558947-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr: exit status 7 (448.076358ms)

                                                
                                                
-- stdout --
	multinode-558947
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558947-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558947-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:06:00.774716   29414 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:06:00.774989   29414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:06:00.774999   29414 out.go:309] Setting ErrFile to fd 2...
	I1205 20:06:00.775004   29414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:06:00.775246   29414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:06:00.775464   29414 out.go:303] Setting JSON to false
	I1205 20:06:00.775499   29414 mustload.go:65] Loading cluster: multinode-558947
	I1205 20:06:00.775602   29414 notify.go:220] Checking for updates...
	I1205 20:06:00.776026   29414 config.go:182] Loaded profile config "multinode-558947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:06:00.776043   29414 status.go:255] checking status of multinode-558947 ...
	I1205 20:06:00.776490   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:00.776535   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:00.792635   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I1205 20:06:00.793058   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:00.793666   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:00.793694   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:00.794078   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:00.794302   29414 main.go:141] libmachine: (multinode-558947) Calling .GetState
	I1205 20:06:00.795857   29414 status.go:330] multinode-558947 host status = "Running" (err=<nil>)
	I1205 20:06:00.795885   29414 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:06:00.796211   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:00.796260   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:00.810464   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I1205 20:06:00.810855   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:00.811319   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:00.811344   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:00.811630   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:00.811793   29414 main.go:141] libmachine: (multinode-558947) Calling .GetIP
	I1205 20:06:00.814641   29414 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:06:00.815069   29414 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:06:00.815110   29414 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:06:00.815209   29414 host.go:66] Checking if "multinode-558947" exists ...
	I1205 20:06:00.815502   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:00.815543   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:00.829738   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I1205 20:06:00.830207   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:00.830665   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:00.830687   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:00.831036   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:00.831210   29414 main.go:141] libmachine: (multinode-558947) Calling .DriverName
	I1205 20:06:00.831408   29414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:06:00.831430   29414 main.go:141] libmachine: (multinode-558947) Calling .GetSSHHostname
	I1205 20:06:00.834347   29414 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:06:00.834783   29414 main.go:141] libmachine: (multinode-558947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:d0:61", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:03:26 +0000 UTC Type:0 Mac:52:54:00:ca:d0:61 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-558947 Clientid:01:52:54:00:ca:d0:61}
	I1205 20:06:00.834811   29414 main.go:141] libmachine: (multinode-558947) DBG | domain multinode-558947 has defined IP address 192.168.39.3 and MAC address 52:54:00:ca:d0:61 in network mk-multinode-558947
	I1205 20:06:00.834937   29414 main.go:141] libmachine: (multinode-558947) Calling .GetSSHPort
	I1205 20:06:00.835100   29414 main.go:141] libmachine: (multinode-558947) Calling .GetSSHKeyPath
	I1205 20:06:00.835278   29414 main.go:141] libmachine: (multinode-558947) Calling .GetSSHUsername
	I1205 20:06:00.835448   29414 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947/id_rsa Username:docker}
	I1205 20:06:00.930227   29414 ssh_runner.go:195] Run: systemctl --version
	I1205 20:06:00.935907   29414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:06:00.952772   29414 kubeconfig.go:92] found "multinode-558947" server: "https://192.168.39.3:8443"
	I1205 20:06:00.952799   29414 api_server.go:166] Checking apiserver status ...
	I1205 20:06:00.952828   29414 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:06:00.970966   29414 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1064/cgroup
	I1205 20:06:00.979477   29414 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod0a38ef6c4499d9729cedfe70dc9f6984/crio-73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f"
	I1205 20:06:00.979534   29414 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0a38ef6c4499d9729cedfe70dc9f6984/crio-73c0850b27ca05bf87326fddd5dd10447c570d8952536f9d30a52718ad6a365f/freezer.state
	I1205 20:06:00.988210   29414 api_server.go:204] freezer state: "THAWED"
	I1205 20:06:00.988240   29414 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I1205 20:06:00.994089   29414 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I1205 20:06:00.994116   29414 status.go:421] multinode-558947 apiserver status = Running (err=<nil>)
	I1205 20:06:00.994131   29414 status.go:257] multinode-558947 status: &{Name:multinode-558947 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:06:00.994150   29414 status.go:255] checking status of multinode-558947-m02 ...
	I1205 20:06:00.994511   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:00.994556   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:01.008650   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I1205 20:06:01.009053   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:01.009557   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:01.009580   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:01.009851   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:01.010025   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetState
	I1205 20:06:01.011408   29414 status.go:330] multinode-558947-m02 host status = "Running" (err=<nil>)
	I1205 20:06:01.011426   29414 host.go:66] Checking if "multinode-558947-m02" exists ...
	I1205 20:06:01.011694   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:01.011755   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:01.025991   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I1205 20:06:01.026389   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:01.026884   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:01.026906   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:01.027169   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:01.027320   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetIP
	I1205 20:06:01.030242   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:06:01.030664   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:06:01.030702   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:06:01.030827   29414 host.go:66] Checking if "multinode-558947-m02" exists ...
	I1205 20:06:01.031185   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:01.031224   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:01.045206   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I1205 20:06:01.045583   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:01.046035   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:01.046053   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:01.046404   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:01.046636   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .DriverName
	I1205 20:06:01.046847   29414 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 20:06:01.046867   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHHostname
	I1205 20:06:01.049528   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:06:01.049929   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:96:d8", ip: ""} in network mk-multinode-558947: {Iface:virbr1 ExpiryTime:2023-12-05 21:04:34 +0000 UTC Type:0 Mac:52:54:00:78:96:d8 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-558947-m02 Clientid:01:52:54:00:78:96:d8}
	I1205 20:06:01.050117   29414 main.go:141] libmachine: (multinode-558947-m02) DBG | domain multinode-558947-m02 has defined IP address 192.168.39.10 and MAC address 52:54:00:78:96:d8 in network mk-multinode-558947
	I1205 20:06:01.050378   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHPort
	I1205 20:06:01.050653   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHKeyPath
	I1205 20:06:01.050895   29414 main.go:141] libmachine: (multinode-558947-m02) Calling .GetSSHUsername
	I1205 20:06:01.051018   29414 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17731-6237/.minikube/machines/multinode-558947-m02/id_rsa Username:docker}
	I1205 20:06:01.137718   29414 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:06:01.150808   29414 status.go:257] multinode-558947-m02 status: &{Name:multinode-558947-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 20:06:01.150851   29414 status.go:255] checking status of multinode-558947-m03 ...
	I1205 20:06:01.151228   29414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:06:01.151276   29414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:06:01.166212   29414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I1205 20:06:01.166633   29414 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:06:01.167099   29414 main.go:141] libmachine: Using API Version  1
	I1205 20:06:01.167127   29414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:06:01.167448   29414 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:06:01.167629   29414 main.go:141] libmachine: (multinode-558947-m03) Calling .GetState
	I1205 20:06:01.169196   29414 status.go:330] multinode-558947-m03 host status = "Stopped" (err=<nil>)
	I1205 20:06:01.169212   29414 status.go:343] host is not running, skipping remaining checks
	I1205 20:06:01.169218   29414 status.go:257] multinode-558947-m03 status: &{Name:multinode-558947-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.00s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-558947 node start m03 --alsologtostderr: (28.435685428s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-558947 node delete m03: (1.255406202s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.81s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (446.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558947 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 20:22:37.059867   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:22:46.654065   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:25:16.959394   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:25:40.106684   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:27:37.060839   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:27:46.651329   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558947 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.172591033s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558947 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (446.73s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558947
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558947-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-558947-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.058628ms)

                                                
                                                
-- stdout --
	* [multinode-558947-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-558947-m02' is duplicated with machine name 'multinode-558947-m02' in profile 'multinode-558947'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558947-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558947-m03 --driver=kvm2  --container-runtime=crio: (46.96918341s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-558947
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-558947: exit status 80 (241.25361ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-558947
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-558947-m03 already exists in multinode-558947-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-558947-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.12s)

                                                
                                    
x
+
TestScheduledStopUnix (118.47s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-353174 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-353174 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.725079334s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-353174 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-353174 -n scheduled-stop-353174
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-353174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-353174 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-353174 -n scheduled-stop-353174
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-353174
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-353174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1205 20:35:16.960016   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-353174
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-353174: exit status 7 (74.703134ms)

                                                
                                                
-- stdout --
	scheduled-stop-353174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-353174 -n scheduled-stop-353174
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-353174 -n scheduled-stop-353174: exit status 7 (73.739805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-353174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-353174
--- PASS: TestScheduledStopUnix (118.47s)

                                                
                                    
x
+
TestKubernetesUpgrade (223.52s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.776453861s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-128284
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-128284: (5.222180824s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-128284 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-128284 status --format={{.Host}}: exit status 7 (85.928123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.342206405s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-128284 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (113.52596ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-128284] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-128284
	    minikube start -p kubernetes-upgrade-128284 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1282842 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-128284 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-128284 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.765957005s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-128284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-128284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-128284: (1.13695784s)
--- PASS: TestKubernetesUpgrade (223.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.61862ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-086358] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (110.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-086358 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-086358 --driver=kvm2  --container-runtime=crio: (1m49.900111848s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-086358 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (110.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (33.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1205 20:37:37.060448   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.637162483s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-086358 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-086358 status -o json: exit status 2 (442.472223ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-086358","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-086358
E1205 20:37:46.651782   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-086358: (1.17562624s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-086358 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.188246038s)
--- PASS: TestNoKubernetes/serial/Start (28.19s)

                                                
                                    
TestPause/serial/Start (89.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-405510 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m29.266257438s)
--- PASS: TestPause/serial/Start (89.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-086358 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-086358 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.042607ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
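Note on the exit status above: "systemctl is-active" returns 0 only when the unit is active, so the non-zero result (the remote command exits with status 3, which systemd uses for an inactive unit) is exactly what this check expects once Kubernetes is disabled. A minimal sketch of the same check run by hand inside the guest, assuming a systemd host with a kubelet unit installed (illustration only, not part of this run):

	$ sudo systemctl is-active kubelet; echo $?
	inactive
	3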

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-086358
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-086358: (1.361996999s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (48.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-086358 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-086358 --driver=kvm2  --container-runtime=crio: (48.612568549s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (48.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-086358 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-086358 "sudo systemctl is-active --quiet service kubelet": exit status 1 (642.779944ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.64s)

                                                
                                    
TestNetworkPlugins/group/false (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-855101 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-855101 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (128.067728ms)

                                                
                                                
-- stdout --
	* [false-855101] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17731
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:39:11.016658   40782 out.go:296] Setting OutFile to fd 1 ...
	I1205 20:39:11.016793   40782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:11.016802   40782 out.go:309] Setting ErrFile to fd 2...
	I1205 20:39:11.016807   40782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1205 20:39:11.016996   40782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17731-6237/.minikube/bin
	I1205 20:39:11.017613   40782 out.go:303] Setting JSON to false
	I1205 20:39:11.018598   40782 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4904,"bootTime":1701803847,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:39:11.018666   40782 start.go:138] virtualization: kvm guest
	I1205 20:39:11.021671   40782 out.go:177] * [false-855101] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:39:11.023385   40782 out.go:177]   - MINIKUBE_LOCATION=17731
	I1205 20:39:11.023399   40782 notify.go:220] Checking for updates...
	I1205 20:39:11.024879   40782 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:39:11.026267   40782 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17731-6237/kubeconfig
	I1205 20:39:11.027668   40782 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17731-6237/.minikube
	I1205 20:39:11.029149   40782 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:39:11.030383   40782 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:39:11.032222   40782 config.go:182] Loaded profile config "force-systemd-flag-699600": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:39:11.032337   40782 config.go:182] Loaded profile config "pause-405510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1205 20:39:11.032389   40782 config.go:182] Loaded profile config "stopped-upgrade-601680": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1205 20:39:11.032468   40782 driver.go:392] Setting default libvirt URI to qemu:///system
	I1205 20:39:11.069446   40782 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:39:11.070766   40782 start.go:298] selected driver: kvm2
	I1205 20:39:11.070784   40782 start.go:902] validating driver "kvm2" against <nil>
	I1205 20:39:11.070798   40782 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:39:11.072907   40782 out.go:177] 
	W1205 20:39:11.074332   40782 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 20:39:11.075633   40782 out.go:177] 

                                                
                                                
** /stderr **
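The MK_USAGE exit above is the outcome this test looks for: minikube refuses --cni=false with the crio runtime because crio has no built-in pod networking and requires a CNI plugin. For illustration only (not part of this run, profile name is a placeholder), a crio profile would normally be started with the default auto-selected CNI or an explicit one, e.g.:

	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio --cni=bridge

Besides bridge, --cni accepts auto, calico, cilium, flannel, kindnet, or a path to a CNI manifest, as exercised by the other TestNetworkPlugins groups later in this report.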
net_test.go:88: 
----------------------- debugLogs start: false-855101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-855101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-855101"

                                                
                                                
----------------------- debugLogs end: false-855101 [took: 3.28539906s] --------------------------------
helpers_test.go:175: Cleaning up "false-855101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-855101
--- PASS: TestNetworkPlugins/group/false (3.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (184.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061206 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061206 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (3m4.927913864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (184.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (195.60s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-143651 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-143651 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (3m15.595738224s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (195.60s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-601680
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (139.70s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-331495 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-331495 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m19.70172217s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (139.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-331495 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d792e7e9-2949-425a-a198-a3b696020cfd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d792e7e9-2949-425a-a198-a3b696020cfd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.039839742s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-331495 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-331495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-331495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.17461877s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-331495 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061206 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ddf04adb-4490-485b-9f6d-56fedbf010fe] Pending
helpers_test.go:344: "busybox" [ddf04adb-4490-485b-9f6d-56fedbf010fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ddf04adb-4490-485b-9f6d-56fedbf010fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.03440727s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061206 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-061206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-061206 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143651 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d7a0d62-0997-4436-a863-dbc125049d5a] Pending
helpers_test.go:344: "busybox" [6d7a0d62-0997-4436-a863-dbc125049d5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d7a0d62-0997-4436-a863-dbc125049d5a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.033934497s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143651 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-143651 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-143651 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008432401s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143651 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m5.775578545s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (684.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-331495 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-331495 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m24.519210303s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-331495 -n embed-certs-331495
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (684.81s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.50s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [583b1351-dfeb-4b29-ad50-7e4c204c9931] Pending
helpers_test.go:344: "busybox" [583b1351-dfeb-4b29-ad50-7e4c204c9931] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [583b1351-dfeb-4b29-ad50-7e4c204c9931] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.037187899s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-463614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-463614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049423051s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-463614 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (704.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061206 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061206 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (11m44.442308956s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061206 -n old-k8s-version-061206
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (704.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (611.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-143651 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 20:47:29.701170   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:47:37.060182   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:47:46.652069   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-143651 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (10m11.20689847s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-143651 -n no-preload-143651
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (611.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (460.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1205 20:50:00.012107   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:50:16.960001   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 20:52:37.059974   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 20:52:46.651310   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 20:55:16.959521   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463614 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (7m40.163831245s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (460.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.20s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (1m0.196365941s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (125.80s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m5.800616065s)
--- PASS: TestNetworkPlugins/group/auto/Start (125.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (108.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m48.9906212s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (108.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-051721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1555343s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dn9f2" [26f8da94-ff78-4130-b18e-167017b9dbb4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023333051s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sr99f" [75fdcfcf-2d6e-450e-9533-989164aa0e96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sr99f" [75fdcfcf-2d6e-450e-9533-989164aa0e96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.017743566s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tbnvg" [4745b053-473c-4e53-bc2d-77c8f8b8a2a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tbnvg" [4745b053-473c-4e53-bc2d-77c8f8b8a2a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.016466904s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.73s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
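Localhost and HairPin above run the same netcat probe against different targets: localhost:8080 verifies the pod can reach its own listener, while connecting to the netcat Service name from the pod that backs it verifies hairpin traffic (a pod reaching itself through its own Service VIP). If a hairpin check ever fails and the CNI wires pods through a Linux bridge, one hedged place to look is the hairpin flag on the node's bridge ports (standard sysfs path; profile name taken from this run):
out/minikube-linux-amd64 ssh -p kindnet-855101 "grep -H . /sys/class/net/*/brport/hairpin_mode"
# 1 = hairpin enabled on that bridge port, 0 = disabled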

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.055175477s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (108.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1205 21:14:08.368405   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.373661   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.383955   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.404207   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.444486   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.524940   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:08.685385   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:09.005727   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:09.646702   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:14:10.927463   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m48.280455682s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (108.28s)
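Unlike the calico run above, which selects a bundled CNI by name (--cni=calico), this run passes a manifest path (--cni=testdata/kube-flannel.yaml), so minikube applies that file instead of a built-in one. A hedged way to confirm which CNI configuration actually landed on the node (standard CNI conf directory assumed):
out/minikube-linux-amd64 ssh -p custom-flannel-855101 "ls /etc/cni/net.d/"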

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (406.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-051721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1
E1205 21:14:43.388403   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:14:49.330395   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-051721 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.1: (6m46.414526732s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-051721 -n newest-cni-051721
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (406.73s)
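The restart above passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 together with --network-plugin=cni, so the cluster should come back with the custom pod CIDR rather than the default. A hedged spot-check that the CIDR took effect, assuming kubeadm's usual static-pod labels and that the value surfaces as the controller-manager's --cluster-cidr flag:
kubectl --context newest-cni-051721 -n kube-system get pod -l component=kube-controller-manager -o yaml | grep -- --cluster-cidr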

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-463614 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-463614 --alsologtostderr -v=1
E1205 21:15:03.869484   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-463614 --alsologtostderr -v=1: (1.335543389s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614: exit status 2 (290.948376ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614: exit status 2 (296.163761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-463614 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463614 -n default-k8s-diff-port-463614
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.36s)
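The "status error: exit status 2 (may be ok)" lines above are expected: after minikube pause, status reports APIServer=Paused and Kubelet=Stopped and exits non-zero, which the test accepts as success. A condensed sketch of the round-trip the test performs (profile name and exit codes as observed in this run):
out/minikube-linux-amd64 pause   -p default-k8s-diff-port-463614
out/minikube-linux-amd64 status  -p default-k8s-diff-port-463614 --format={{.APIServer}}   # Paused, exit 2
out/minikube-linux-amd64 status  -p default-k8s-diff-port-463614 --format={{.Kubelet}}     # Stopped, exit 2
out/minikube-linux-amd64 unpause -p default-k8s-diff-port-463614
out/minikube-linux-amd64 status  -p default-k8s-diff-port-463614 --format={{.APIServer}}   # exit 0 once unpaused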

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (352.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1205 21:15:16.959398   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (5m52.238275124s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (352.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8rbwv" [c7b40384-9728-4652-807f-6515e2b62f17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.027895731s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)
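ControllerPod only waits for the CNI's own agent pod to be healthy before the connectivity tests run; here that is the pod labelled k8s-app=calico-node in kube-system (the flannel group further down does the same with app=flannel in kube-flannel). A hedged kubectl equivalent of that wait:
kubectl --context calico-855101 -n kube-system wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=10m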

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z8qxs" [9510184b-c22f-4079-b1b0-0555d9fedf24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 21:15:30.291543   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z8qxs" [9510184b-c22f-4079-b1b0-0555d9fedf24] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.011477116s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-twwdp" [c79d239a-5da1-46c8-b5f9-d7b04d23120d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 21:15:40.109342   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-twwdp" [c79d239a-5da1-46c8-b5f9-d7b04d23120d] Running
E1205 21:15:44.830036   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.012093161s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (315.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (5m15.759006332s)
--- PASS: TestNetworkPlugins/group/flannel/Start (315.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (345.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1205 21:16:34.902539   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:34.907773   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:34.918078   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:34.938351   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:34.978624   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:35.058992   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:35.219465   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:35.540224   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:36.180502   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:37.460906   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:40.021583   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:45.142680   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:16:52.212059   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:16:55.383024   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:17:06.750654   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:17:15.863691   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:17:37.060785   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/addons-489440/client.crt: no such file or directory
E1205 21:17:46.651730   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 21:17:56.823940   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:18:15.696449   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:15.701785   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:15.712066   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:15.732353   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:15.772642   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:15.852997   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:16.013423   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:16.334004   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:16.974208   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:17.016450   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.021722   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.032053   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.052323   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.092699   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.173070   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.333585   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:17.654190   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:18.255159   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:18.294394   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:19.575305   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:20.815880   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:22.136299   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:25.936822   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:27.257344   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:36.177054   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:37.498487   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:18:56.657614   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:18:57.978850   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:19:08.368115   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:19:18.744801   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:19:22.906789   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:19:36.052440   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/old-k8s-version-061206/client.crt: no such file or directory
E1205 21:19:37.618107   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:19:38.939981   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:19:50.591692   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/no-preload-143651/client.crt: no such file or directory
E1205 21:20:16.959998   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/ingress-addon-legacy-376951/client.crt: no such file or directory
E1205 21:20:19.955131   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:19.960446   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:19.971635   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:19.991962   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:20.032318   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:20.112702   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:20.273147   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:20.593707   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:21.234419   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:22.515688   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:25.076946   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:30.197867   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:39.390588   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.395983   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.406306   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.426748   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.467081   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.547469   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:39.707901   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:40.028770   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:40.438393   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
E1205 21:20:40.669817   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:41.950402   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:44.511488   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:49.632395   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:20:49.702717   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/functional-341707/client.crt: no such file or directory
E1205 21:20:59.539235   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/kindnet-855101/client.crt: no such file or directory
E1205 21:20:59.873255   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-855101 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (5m45.497542948s)
--- PASS: TestNetworkPlugins/group/bridge/Start (345.50s)
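The long run of cert_rotation.go:168 errors interleaved with this start is most likely noise from the shared test process rather than a problem with the bridge cluster: client-go's certificate-rotation watcher appears to still be pointed at client.crt files of profiles that earlier tests already deleted (old-k8s-version-061206, kindnet-855101, auto-855101, calico-855101, custom-flannel-855101, default-k8s-diff-port-463614, ...), so every reload attempt fails with "no such file or directory". A hedged quick check that the files really are gone (path taken from the messages above):
ls /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/*/client.crt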

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-phvcq" [32c1ee6a-a41d-4559-80fe-7b29c314b010] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 21:21:00.860146   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/auto-855101/client.crt: no such file or directory
E1205 21:21:00.918882   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-phvcq" [32c1ee6a-a41d-4559-80fe-7b29c314b010] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.01107616s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-96zrk" [11cd6a85-9077-407f-bce0-f0d5a67671c1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.022191876s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-76szj" [0429aec3-d0f6-410a-9da3-e4c4f93cce3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-76szj" [0429aec3-d0f6-410a-9da3-e4c4f93cce3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.014652971s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-051721 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
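VerifyKubernetesImages lists what is loaded in the node's container runtime and flags anything outside the expected Kubernetes image set; kindest/kindnetd shows up here because this profile runs the kindnet CNI. A hedged manual equivalent (the JSON field layout of image list is not shown in this log, so the plain listing is used):
out/minikube-linux-amd64 -p newest-cni-051721 image list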

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-051721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051721 -n newest-cni-051721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051721 -n newest-cni-051721: exit status 2 (282.451426ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051721 -n newest-cni-051721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051721 -n newest-cni-051721: exit status 2 (271.614847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-051721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-051721 -n newest-cni-051721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-051721 -n newest-cni-051721
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.48s)
E1205 21:21:34.902013   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
E1205 21:21:41.879948   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/calico-855101/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-855101 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-855101 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mxpm5" [0b093086-0de0-4f00-be1d-5aca12850e2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mxpm5" [0b093086-0de0-4f00-be1d-5aca12850e2e] Running
E1205 21:22:01.314987   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/custom-flannel-855101/client.crt: no such file or directory
E1205 21:22:02.585905   13410 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/default-k8s-diff-port-463614/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.010945417s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-855101 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-855101 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (39/301)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.1/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.1/binaries 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
226 TestChangeNoneUser 0
229 TestScheduledStopWindows 0
231 TestSkaffold 0
233 TestInsufficientStorage 0
237 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.15
261 TestNetworkPlugins/group/kubenet 3.45
269 TestNetworkPlugins/group/cilium 4.08
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-255695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-255695
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-855101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-855101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-855101"

                                                
                                                
----------------------- debugLogs end: kubenet-855101 [took: 3.279034547s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-855101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-855101
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-855101 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-855101" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17731-6237/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Dec 2023 20:39:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.159:8443
  name: pause-405510
contexts:
- context:
    cluster: pause-405510
    extensions:
    - extension:
        last-update: Tue, 05 Dec 2023 20:39:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-405510
  name: pause-405510
current-context: pause-405510
kind: Config
preferences: {}
users:
- name: pause-405510
  user:
    client-certificate: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/client.crt
    client-key: /home/jenkins/minikube-integration/17731-6237/.minikube/profiles/pause-405510/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-855101

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-855101" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-855101"

                                                
                                                
----------------------- debugLogs end: cilium-855101 [took: 3.885749993s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-855101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-855101
--- SKIP: TestNetworkPlugins/group/cilium (4.08s)

                                                
                                    